From: "Kavitha Ranganathan" To: "Anda Iamnitchi" Sent: Tuesday, April 02, 2002 1:54 PM End-To-End Arguments in System Design & Rethinking the design of the Internet: The end to end arguments vs. the brave new world. Both papers present guidelines for system design. In the old paper, the authors vehemently advocate end-to-end design principles using the communication network as an example. The essence being that it is better to incorporate functions at the highest level possible in a layered design (closer to the application). The paper was only mildly convincing if at all, largely because the current internet scenario is so very different than what it was at the time the paper was written. Also, I didnot buy all their arguments. E.g. there could be a neat distiction between the checks in reliability provided by the network and those provided by the application: less repetition of essential checks. Nevertheless, the paper illustrated how difficult it can be to envison the direction technology progresses in and to predict the future design concerns for a system. The 2nd paper ( though 29 pages long !) was more ingrossing. The authors lay out the current internet environment and the roles for various players in it. The clear emphasis is on lack of trust and anonimity, which according to them forces one to rethink the singleminded adaptation of end-to-end design. It might well be neccesary to deviate for the original design of the internet and incorporate intelligence "in" the network. I for one agree with enabling the network with intelligence. The tricky part would be to still encourage and enable innovation at the user end-points and to maintain the balance of power between users, service providers and authorities like the government. Though the paper is convincing, there were times where it could have been more succinct. (Rating Old: 3, New: 4) From: "Xinghua Shi" To: "Adriana Iamnitchi" Cc: ; Sent: Monday, April 01, 2002 6:07 PM I should say first that I got entirely surprised when I read through all these four papers, especially "End-T0-End Arguments in System Design" which I read first. This paper states the End-To-End Arguemnt which is new to me. Some ideas are commonsense nowadays but it presents it in a systemtical way which I am unfamiliar with. This paper combined with "Rethining..." presents a global view of the system design. 1. End-T0-End Arguments in System Design Contribution: Provide a systematic way and rationale in system design. This argument prefers to put a function to the highest layer it can be placed, i.e. the "upward moving". It comes up with a gross guidance in system design especially in communication network implementation. Advantage: The paper comes up with several case studies which are appropriate to the use of End-To_End argument. From this viewpoint, those functions putting up in a system are disirable than putting them down. Disadvantage: End-to_End is only a guidance in system design. It's not absolutely correct in any situations. Actually, with the development of the Internet in the past 20 years, more and more applications and requirements arise and can't be solved by End-To_End argument singly. This gets full explanation in the second paper. Solutions: The second paper provises solutions to the above questions. What I want to say here is that, in the implementation of the Internet, some ideas in operationg systems can be introduced or at least considered. We can split functions into distinct layers according to their relative relations. 
2. Rethinking the Design of the Internet

Contribution: This paper is really an overall survey and analysis of the design of the Internet from many angles, including technical, economic, social, and political considerations. In a word, it is about real-world Internet design at present, and it gives new insight into the world of the Internet. As the world develops rapidly, many new requirements come out, with a large impact on Internet performance and design. Now the original end-to-end argument does not always work well. The paper discusses some adjustments, such as adjustments at the end points and placing some functions at a lower level (ISPs, PICS, etc.). Some technical analysis is made and a promising future is sketched.

Advantages: This is a good document because it provides many suggestions across a wide range of Internet design issues.

Disadvantages: As this is a general discussion of the many topics related to the design of the Internet, it does not provide many details of any specific design. So, in some sense, the paper is too general.

Date: Thu, 28 Mar 2002 17:15:48 -0600 (CST)
From: "Jens-S. Voeckler"
To: Anda Iamnitchi
Subject: hw1

Hi,

I read the first, old one, and started on the second, new one. I have to admit that the first one was refreshing - being so off the target. As you can see, I am violently opposed to the end2end model, at least as described in the first paper - the new one is much better in this regard.

The [old] paper tries to contribute a model or design pattern for sleek "ground" systems, not limited to networking or kernels. The claim is that only the app will know exactly how to react correctly to network misbehaviour. From today's perspective, if one thinks in ISO/OSI layers and their formalization, the initial claim [italics on p. 278] can be contradicted: if one views a protocol layer as an endpoint, a function can be completely and correctly implemented in the layer. If one takes the authors' claim to the extreme, it would mean that each application would need to carry its own version of, let's say, TCP with it. We all know that in this case only a few applications could talk to each other, and we would never have seen such broad interoperability as we currently see with TCP. Worse, while the networking layer is kept simple, the complexity is pushed repeatedly into each application that wants to make use of the network. Assuming an extreme approach, a standard library might have evolved, taking the same road as libc for kernels, with the same drawbacks to interoperability as today's systems experience.

Unfortunately, the authors take almost the whole paper to bring the point across. Even more unfortunate is their choice of examples and illustrations:

[1] In section 2.1, they identify five threats to file transfers.
But the first three threats have nothing to do with networked file transfers and would also apply if system A copied a file on its own disk. Throughout the document they refer to these threats, which are independent of network communication.

[1.1] Corruption on disk is a task for the OS and the hardware involved. If you want more security, spend more money.

[1.2] Threat 2 is a systematic error, raving about bad programming practice and ill-tested software. Again, it has nothing to do with the end2end argument, but rather with bad software engineering practice. Furthermore, by moving this complexity out of the network layer into the application, applications become more likely to fail than vendor-provided network layers.

[1.3] If the authors are that concerned about alpha particles flipping bits, they should be prepared [a] to spend the money on hardware with error correction etc., and [b] yes, for such applications, like a NASA shuttle computer, to take extra precautions in the application layer, e.g. keeping an md5sum for each object at all times and checking it each time the object state is accessed.

[2] On page 279, in the next-to-last paragraph, they describe a retry scheme in very detailed fashion. Unfortunately, the paragraph shows that the authors are clearly only theoreticians: while they develop a convoluted scheme with little practical merit, they manage to obscure the issue.

[3] On page 283, the authors propose to re-use a key for multiple hosts. This again sounds like they did not know what they were talking about, and if they had not mentioned it, I would have given them more credit.

In section 2.3 they juggle a probability and claim exponential growth. Statistics were sacrificed by my math prof in favour of worthless things like Schwartzian dual spaces, so it is not my strong point, but I doubt the exponential claim and believe the relationship is linear.

In 1981, performance (and memory space) was definitely an issue, and bandwidth was expensive. They claim that a network layer that does all the nice things will consume bandwidth for redundant data transmissions and add latency. By pushing this part of the complexity into an application, nothing is gained: network bandwidth will still be lost due to retries, which this time come from the application. Of course, the authors mean that there might be more than one application, and some don't need all the bells and whistles. Today, we know that in a layered model it is sensible to expose lower interfaces to applications that want to circumnavigate the middleware, for performance or other reasons. I cannot find any thought to that effect in the paper.

In section 3.3, they argue about dup suppression. While they never mention sequence numbers, they try to make a case for dup suppression in the app layer. The only thing I can take from their argument is that dup suppression should not be done twice.
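For what it is worth, sequence-number based suppression at an endpoint is only a few lines. A purely illustrative sketch (nothing of the sort appears in the paper):

    # Duplicate suppression with sequence numbers (illustrative sketch only).
    # Each message carries a sequence number assigned by the sender; the
    # receiver delivers a given number only once. A real implementation
    # would bound this state, e.g. with a sliding window.
    class DedupReceiver:
        def __init__(self):
            self.seen = set()

        def receive(self, seq, payload):
            if seq in self.seen:
                return None            # duplicate: suppress it
            self.seen.add(seq)
            return payload             # first copy: hand it to the application

    rx = DedupReceiver()
    assert rx.receive(1, "a") == "a"
    assert rx.receive(1, "a") is None  # retransmitted copy is dropped
    assert rx.receive(2, "b") == "b"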
Section 3.4 is even harder to understand. Their case is that a node can have multiple virtual circuits to many others, but ordering is not guaranteed. If the ordering is part of the VC, it is not a problem. If they are talking about multicasting, then yes, it has to be handled at a higher level.

Let me propose a counter example to their end2end thoughts: looking at the IETF drafts and RFC folder, I see at least three proposals for reliable multicast transfers. Obviously, this is a common problem. Also, many apps solve it for themselves, limiting their degree of interoperability. If it had been solved once and for all, and put into the networking layer, we could all enjoy a better multicast backbone today.

All in all, the paper is a nice summary of the problems that were mostly solved by TCP.

Ciao,
Dipl.-Ing. Jens-S. Vöckler (voeckler@cs.uchicago.edu)
University of Chicago; Computer Science Department; Ryerson 155; 1100 East 58th Street; Chicago, IL 60637-1581; USA; +1 773 834 9170

Date: Thu, 28 Mar 2002 17:49:42 -0600
To: Adriana Iamnitchi
From: Sudharshan Vazhkudai
Subject: Class.

Anda, just wrote down a few points that I thought were interesting... ciao tomorrow...
Cheers, Sudharshan.
--------------------------------
The first line in the conclusion, "End-to-end arguments are kind of Occam's razor", captures the essence of the paper: it highlights that often the simplest explanation is the more apt one -- i.e., being able to abstract out the bare minimal, absolutely essential features that go in the core. Although this is easier said than done; striking the balance is often the most complex thing, I guess. The paper cites several examples of communication systems where this can be useful. The authors are quick to point out the relative merits of adding complexity to lower levels and concur that there should be some basic reliability, but absolute checks should be deferred to higher levels or end points.

The most convincing argument the authors make is that adding complexity to core modules might affect applications that do not always require it. This argument can possibly be extrapolated to various scenarios -- for instance, what needs to go in the Grid middleware? How much is essential? How would we know what's enough? (That, to me, is the problem -- you cannot foresee the possible application domain, I guess...)

The paper "Rethinking..." presents scenarios where end-to-end arguments might not always hold in today's Internet due to its explosive growth -- what with newer, more stringent applications; lack of trust, as distributed systems evolved from just a few scientists to "anyone who's anybody" being able to communicate; third-party involvement (ISPs, government agencies monitoring); and multiway communication (web caching, replication, etc.). The bottom line: today's communication is more complex, and can end-to-end arguments still hold for today's world?
---------------------------------

From: "Adriana Iamnitchi"
Subject: 1st
Date: Thu, 28 Mar 2002 22:58:39 -0600

Papers:
[1] J. Saltzer, D. Reed, and D. Clark, End-to-End Arguments in System Design. ACM Transactions on Computer Systems, Vol. 2, No. 4, pp. 277-288, 1984.
[2] D. Clark and M. Blumenthal, Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World, Workshop on Policy Implications of End-to-End, December 1, 2001.

a. State the main contribution of the paper

Both papers are position papers: they do not propose a new solution, but rather observe existing trends and comment on their implications. Reading the two papers together enhances, in my opinion, the message transmitted by each of them separately: e.g., [2] refers to a naivety that governed the early, elitist Internet society, and that naivety is indeed shown by the '84 paper [1]. [1] claims that the end-to-end approach in system design is the one to be followed, as it provides generality and flexibility. It is most likely that this argument was correct, given today's popularity of the Internet.
[2] shows that, despite the proven correctness of the end-to-end argument in Internet design, the "world" (the Internet community) changed, and hence the end-to-end argument is no longer the only clear path to follow. No solutions are provided, but a lot of interesting problems are raised.

b. Critique the main contribution

1. It is difficult to quantify the contribution of these papers. I found the arguments made in [1] obvious, but I believe they weren't obvious before the Internet really took off. I found [2] an interesting overview of the many aspects that play a role in the success or failure of a new technology: social, economic, and political influences may shape a technology even against the technical arguments. It also highlights many interesting conflicting aspects of the same problem (anonymity vs. accountability: both are needed and both can be dangerous), and I am curious to see how they'll turn out. Bets?
2. [not applicable]
3. [not applicable]

c. What are the three most striking weaknesses in the paper?

Again, hard to tell. (Who made this stupid questionnaire, anyway? :o) I would like to see some more detailed analysis (perhaps quantitative) of some of the influences other than the technological ones, but that wasn't the goal of either paper -- and a 29-page paper is long enough, I'd say...

From: Catalin Lucian Dumitrescu
To: anda@cs.uchicago.edu
Subject: paper evaluations

I am late, but thanks a lot.
catalin

End-to-End Arguments in System Design

The paper introduces the idea that decisions about implementing functions at the low levels of a distributed system should be guided mainly by performance enhancement, not treated as required features. The principle is called the "end-to-end argument", and, in addition, it emphasizes that such functions can be completely and correctly implemented only with the help of the involved application; at most, incomplete versions may be provided in the low-level subsystems. The authors consider the case of two computers A and B that exchange data files and the function of checking transfer correctness. In this two-host data movement scenario, the two end points (let's consider FTP) must check not only the correctness of the communication operation, but also the correctness of the other operations involved, such as disk I/O or memory copying. Thus, the check function is implemented at the high level anyway. An implementation at the communication subsystem level would provide only a performance enhancement in the case of large files.
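A minimal sketch of such an end-point check (illustrative only; the copy step is a hypothetical stand-in for whatever transfer mechanism is actually used):

    # End-to-end careful file transfer, sketched at the application layer.
    # The sender checksums the source file; the checksum is recomputed over
    # what actually landed on the destination disk, and the transfer is
    # retried until the two match or the attempts run out.
    import hashlib

    def checksum(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def careful_transfer(src, dst, copy, attempts=3):
        """copy(src, dst) is a hypothetical, possibly unreliable transfer step."""
        expected = checksum(src)
        for _ in range(attempts):
            copy(src, dst)                     # network, disk I/O, buffering...
            if checksum(dst) == expected:      # the end-to-end check
                return True
        return False

    # e.g. with a plain local copy standing in for the transfer step:
    # import shutil; careful_transfer("a.dat", "b.dat", shutil.copyfile)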
Additional case scenarios for the end-to-end argument are presented: delivery guarantees, secure transmission of data, duplicate message suppression, guaranteeing FIFO message delivery, and transaction management. All the examples were collected over the years and are supported by the experience of the moment. While these examples support the principle, "delivery guarantees" seems to me to be in contradiction with what TCP/IP (connected end points) implements: many applications are built over TCP because of its simplicity of use and its independence. Kangaroo is another example of a system that tries to free higher-level systems from lower-level functionality. IPsec is a contradiction of the "secure transmission of data" example (even if I am not sure about its usage), while the other approaches to achieving secure communication over untrusted networks (PGP and PKIX) instead support the end-to-end principle.

Policy-based networking and low-level brokers are starting to gain more and more attention because of the need for bandwidth guarantees for different applications. Another approach (supporting the argument) is the active message exchange paradigm. The analysis and conclusions are, most of the time, grounded in the technological limitations of each given moment in time.

Rating: 4. Convincing: yes. Limitation: it does not take into account technological advances and unexpected social and political problems (as outlined by the second paper).

-------------------------------------------------------------------------------

Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World

The paper analyzes the "end-to-end argument" from different viewpoints (technological, social, and political), and also through the experience and problems that have arisen over the years of Internet usage and evolution. It focuses on the initial design principles, on the new requirements that contradict those principles, on the technical and the larger social and political contexts that influence current trends, and on some of the possible solutions for adapting everything to the new and changing technical (but not only technical) world. It is a "long" evaluation of the elements and forces that are pushing to change the Internet and its initial design goals. Each statement of the paper is well motivated. Next, I analyse one example from each chapter (related somehow to policy):

a. Moving away from end-to-end / More demanding applications: Applications like video conferencing demand more reliability and resource management than the current "best effort" service (currently, no guarantee about throughput is provided). Several solutions have already been proposed; the one presented by the paper is caching/intermediate storage. The solutions I have in mind include network brokers and policy-based networking, which seem to be gaining increased attention. In addition, policy-based networking may provide a simple solution for correct network usage (by introducing penalties for users who abuse the network - see Gnutella).

b. Examples of requirements / One party tries to force interaction on another: The examples provided refer to spam and trojan horses. Still, these elements are not as harmful as DoS attacks, which in most cases cannot be stopped by the victim site alone (its connection is flooded and its services are no longer available for "normal" usage). For such cases, policy-based networking that enforces "fair" usage is even more necessary.

c. Technical / Adding function to the core: This is just a continuation of the previous point.

d. Assessing where we are today / Rights and responsibilities: This point focuses on the problems generated by the different laws and ideologies rooted in different countries, and by the different policies that companies impose on their users. The current end-to-end approach gives users control over their communications, while a more centralized approach (functionality implemented in the core of the network) would provide the basis for more control. Several similar patterns are presented, and this paragraph provides many examples.

Future work not mentioned: Grid computing! The paper is pretty long in comparison with the previous one. There are a few chapters (Introduction, Examples of requirements, Technical responses, The larger context, Conclusions) with many sections, and it was pretty difficult to follow all the details. Each chapter reads as a fairly self-contained survey of the problem.
From: "Matei Ripeanu" To: Subject: CS347 - Rewiews (paper1) Date: Fri, 29 Mar 2002 00:41:44 -0600 End-to-end Arguments in System Design J.H. Slatzer, D.P. Reed, D.D. Clark The paper presents system design guidelines. The 'end-to-end' (e2e) argument recommends to place as much functionality as possible in the application level end-systems. The infrastructure used (to communicate) by these end-system should be as flexible and generic as possible. Advantages: resulting infrastructure is generic and will be easy to use for new, unforeseen applications. Most of the functionality one would like to push down to the infrastructure needs to be duplicated at the application level anyway (e.g. message transmission semantics). In some cases, also performance benefits. Authors mention though that, for performance reasons, it might make sense to break away from this the e2e principle. It is clear that the last 20 years have validated e2e design principle: compare the vibrant and dynamic Internet (designed according to the e2e principle) to the old telephone network (where the network is smart and controlling with dumb edge terminals) that has remained unchanged and has seen little innovative use for the past century.