Tuesday, December 10, 2013

A promotion you would never want

On the fateful morning of 21st November, I lost my Dad. He was very sick when we admitted him to a Delhi hospital. He was immediately moved to the ICU. I was pretty confident that it was a matter of days before we would go back home with him. But on the fourth day, I was told that he was no more.

I was not sure what to do. My cousins were with me, and together we took care of everything in Delhi and later in my native Ramnagar. Lots of relatives and friends came to pay their condolences. Most of the talk was that my elder brother and I now have bigger responsibilities.

I could only look at this as a promotion I never wanted.

Sunday, November 10, 2013

One tip on C macros

I am not a computer science graduate, so when I started my career I knew only two languages: Hindi and English. I later moved into the IT industry, as almost every other guy was doing. I went through a crash course on 'C', and before I could learn any other language, my company decided to move me to a client's project. I usually joke that I now know three languages: Hindi, English and 'C', but that is a fact. Since then I have worked with a few more computer languages like C++, Perl and Python, but I have always felt most comfortable with 'C'.

As I mentioned, I learned the basics of 'C' in a five-day crash course, and most of what I now know, I learned from my peers on my various projects. During this time, I first learned the right way of writing 'C' macros and later found the explanation of why that is the right way. I decided to share some of those lessons with you in multiple posts. So here is the first one.

Let us assume you created a macro as follows:

#define  MULTIPLY_BY_TEN( x, y )                x = y * 10

Let us assume the usage of this macro is as mentioned below.

unsigned long multiply_by_ten_function( unsigned long one_time )
{
     unsigned long      ten_times;

     MULTIPLY_BY_TEN( ten_times, one_time );

     return ten_times;
}

Do you see anything wrong in this? No. I also don't see anything wrong. Actually, in this usage, there is nothing wrong. But what do you think about the usage below?

unsigned long multiply_by_ten_function( unsigned long one_time, unsigned long two_time )
{
     unsigned long      ten_times;

     MULTIPLY_BY_TEN( ten_times, one_time + two_time );

     return ten_times;
}

Yes. This will have a problem. Why? Because when the macro is expanded, it will become:

unsigned long multiply_by_ten_function( unsigned long one_time, unsigned long two_time )
{
     unsigned long      ten_times;

     ten_times = one_time + two_time * 10;

     return ten_times;
}

Instead of adding 'one_time' and 'two_time' first, it will multiply 'two_time' by 10 and then add 'one_time' to the result. So how do we fix such a thing? Very simple. We always wrap the arguments of a macro in '()'. Now let us look at the macro again:

#define  MULTIPLY_BY_TEN( x, y )                (x) = (y) * 10

Now this is how it will look after expanding:

unsigned long multiply_by_ten_function( unsigned long one_time, unsigned long two_time )
{
     unsigned long      ten_times;

     (ten_times) = (one_time + two_time) * 10;

     return ten_times;
}

And you will get exactly what you expected. So I hope you will get into the habit of wrapping the arguments of a macro in '()'. Please don't just take my word for it; do try this out by writing a small 'C' program to validate the above.
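Here is a minimal, self-contained program you could use for that validation. The 'unsafe' variant and the variable names are mine, added only to show the difference side by side:

#include <stdio.h>

/* Unsafe version: arguments are not wrapped in '()'. */
#define MULTIPLY_BY_TEN_UNSAFE( x, y )    x = y * 10

/* Safe version: every argument is wrapped in '()'. */
#define MULTIPLY_BY_TEN( x, y )           (x) = (y) * 10

int main( void )
{
    unsigned long one_time = 2, two_time = 3;
    unsigned long unsafe, safe;

    /* Expands to: unsafe = one_time + two_time * 10  ->  2 + 30 = 32 */
    MULTIPLY_BY_TEN_UNSAFE( unsafe, one_time + two_time );

    /* Expands to: (safe) = (one_time + two_time) * 10  ->  5 * 10 = 50 */
    MULTIPLY_BY_TEN( safe, one_time + two_time );

    printf( "unsafe = %lu, safe = %lu\n", unsafe, safe );

    return 0;
}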

I will put up more such tips in future posts. If you want me to write about anything in particular, please leave me a comment.

 

Tuesday, October 22, 2013

Fast path and Slow path

Long back I debugged a problem where a video client was receiving some of the packets of a video stream out of order. During debugging, we found that some of the packets of the video stream were taking Fast Path while some of them were taking Slow Path.

So what exactly is Fast Path and Slow Path?

Before I get into that, let me tell you a little bit about router architecture. Routers usually have a general purpose processor (similar to the CPU used in PCs) which is used for handling the control plane requirements like configuration of the device, networking protocols and packets destined to the router. In early routers, the same general purpose processor was used for forwarding the data packets as well. So, as you can imagine, data forwarding was really slow.

Later, as technology evolved, our routers needed to forward data packets at higher speeds, and a general purpose processor could not do this. So it was decided to move the data forwarding part to a special purpose processor, usually known as a Network Processor. A Network Processor's sole responsibility is to forward data packets. So the current breed of routers uses a general purpose processor for control plane requirements and a Network Processor to forward data packets at wire speed.

Now when a packet is forwarded at wire speed with minimal latency, the packet is said to be taking the Fast Path. Why is this called the Fast Path? Because the data packet is forwarded at the fastest speed. So who is forwarding this data traffic? What did you say? The Network Processor. Yeah! You got this right. Now with this background, I am sure you can figure out which path is the Slow Path. If you said it is when the packet is forwarded by the general purpose processor, I would say, congratulations, you got this right as well.

Someone might ask: if the Network Processor is supposed to forward data traffic, when does the general purpose processor forward data traffic? A good question. It can happen in the following cases:
  1. When ARP needs to be resolved for a gateway, the Network Processor sends all the data packets to the control plane. Once ARP is resolved, the control plane forwards these packets. Subsequent packets will be forwarded by the Network Processor.
  2. There could be an issue because of which the Network Processor might not have enough information to forward a packet. In such a case, the Network Processor sends these packets to the control plane, and the control plane may forward them.
The issue I was debugging was seen because of the 2nd case.
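To make the punt decision a little more concrete, here is a tiny, hypothetical 'C' sketch of the choice made for every packet. None of these names come from any real router; they are only meant to illustrate the two cases listed above:

#include <stdio.h>
#include <stdbool.h>

/* All names below are illustrative, not taken from real router code. */
struct fib_entry {
    bool route_known;    /* does the forwarding table have an entry for this packet? */
    bool arp_resolved;   /* is the next-hop MAC address known? */
};

/* Decide, per packet, whether it can stay on the Fast Path. */
static const char *choose_path( const struct fib_entry *fe )
{
    if ( !fe->route_known )
        return "Slow Path (punt: not enough forwarding information)";  /* case 2 */

    if ( !fe->arp_resolved )
        return "Slow Path (punt: ARP not yet resolved)";               /* case 1 */

    return "Fast Path (forwarded by the Network Processor)";
}

int main( void )
{
    struct fib_entry first_packet = { true, false };   /* ARP still pending */
    struct fib_entry later_packet = { true, true  };   /* ARP resolved */
    struct fib_entry no_route     = { false, false };  /* missing state */

    printf( "first packet : %s\n", choose_path( &first_packet ) );
    printf( "later packet : %s\n", choose_path( &later_packet ) );
    printf( "no route     : %s\n", choose_path( &no_route ) );

    return 0;
}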

Please let me know if you have come across an issue where data traffic was taking Slow Path.

Friday, September 27, 2013

Writing Technical Blogs in routerfreak.com

When I started this blog, I had no idea what I was getting into. I just had this vision that I would write my blog posts in simple language so that more people can read, understand and comment, and we all learn in the process. But one of the immediate challenges was to popularize the blog so that more people read my posts. I did the following:
  1. I shared my blog through mails with my friends and family.
  2. I added my blog to my LinkedIn profile and shared every blog post on LinkedIn.
  3. As suggested by one of my colleagues, I created a Twitter account for this blog to reach a wider audience. I started following some of my friends and colleagues so that they could retweet my tweets and get me a wider audience.
Even after doing the above, I found that traffic to my blog was not as much as I wanted it to be. I thought that I should write guest posts for some popular blogs to reach a wider audience.

When I searched the Internet, I found a couple of good technical blogs. Most of these blogs are Network Administrator oriented. One such blog is Router Freak, which provides tools, tips, reviews etc. for Network Engineers. Router Freak also provides an opportunity for other people to write for them. I contacted them through their web-site, and they very kindly agreed to let me write for them.

I wrote my first blog post on IP TTL Security on 8th August. I followed it up with SSM and ASM on 2nd September. Please do read these two blog posts and send me any comments you may have.

As I was showing enthusiasm in writing blog posts, and I think Router Freak liked the two posts I wrote for them, we decided to get into an agreement. We discussed and decided that I shall write technical content related to networking only for Router Freak.

I think it's a win-win situation for both of us. I get a wider audience for my posts, and Router Freak hopefully gets good technical content. I request you to please subscribe to Router Freak's RSS feed or the Network Engineer's Newsletter or both so that you can be notified about new blog posts.

Sunday, September 15, 2013

Contributing in IETF : where to start?

In one of my earlier posts, 'India in IETF', I described India's contribution to IETF. In that post, I asked you to provide comments and suggestions on how we can improve the quality and quantity of India's contribution to IETF.

In multiple informal discussions, different people have pointed out that while they would like to contribute, they do not know how and where to start. So I decided to write this post to provide some information on the same.

First things first. Do you know how IETF works? If not, then you need to read through my previous post here on the same topic.

If you are reading this, I will assume that you know about the Areas and Working Groups in IETF. You need to identify the Working Group which interests you. It could be related to what you are currently working on, what you will be working on in the near future, or what you want to learn to expand your ever-growing knowledge. You can be interested in multiple Working Groups at the same time.

So what next?

You go ahead and subscribe to the mailing list of each Working Group you are interested in. This is because most of the IETF technical work happens on mailing lists. How do you join a Working Group mailing list? You just need to send a mail to the Working Group mailing list with 'subscribe' in the subject line and the body of the mail.

Now that you have joined the mailing list, you will start receiving mails sent by others. Read them to understand the discussions and people's points of view. If you feel like asking a question in any of the discussions, feel free to do that.

Scan all the Internet Drafts submitted to a Working Group. Pick one that interests you the most and read through it. While reading, take your time to understand the problem it is solving as well as the proposed solution. Note down all the questions you could not answer yourself. Also note down editorial issues like sentences which you think are not clear enough, incorrect spellings etc. At the end of the reading, if you have questions and suggestions on the draft, do send them to the mailing list. One of the authors will acknowledge your suggestions and respond to them. In some cases, you will see that apart from the authors, other people also pitch in with their views on your suggestions. Finally, all your suggestions will be addressed by the authors. You can give yourself a pat on the back because you have just helped enhance the quality of a draft. Do this exercise with all the documents in the Working Group.

Please do not think that any question or suggestion you might have on a draft is a stupid one. You will know its worth only when you send it out on the mailing list. Also, please do not get discouraged if nobody responds to your questions. Mostly this does not happen, but if it does, do send your mail again. If your mail has some valid points, someone will surely reply.

What else?

You come across a problem which your customers are facing, or a problem you see in a protocol you are working with, or a problem you found when you were testing a protocol in a network configuration. You think that a solution to this problem does not exist. You discuss this problem with your colleagues and come up with a solution. You are thinking that this solution will help in deployment. In such a case, you should send out a mail describing the problem on the relevant Working Group's mailing list. If others agree that such a problem exists, you should create a draft and submit it. You should then ask the Working Group to review your draft. It is good to attend an IETF meeting to discuss this in the Working Group meeting as well as in the corridors of the meeting venue with as many people as possible. If the problem you are solving is a real problem, your draft will be accepted by the Working Group. It will then go through multiple rounds of review and will later become an RFC.

Just one suggestion from me. Please be patient after you have sent a mail to a mailing list. People might take some time to reply as they will be busy with their day-to-day work. But let me assure you that people do reply.

So what are you waiting for? Go ahead and start contributing to make the Internet better. Best of luck!

Saturday, August 31, 2013

Defects : Can they be avoided?

If you happen to be working in the IT industry, I can bet that you come across the word 'Defect' (or the other popularly used word, 'Bug') almost on a daily basis. A 'defect' is nothing but a case of something not working as expected.

You can see defects in almost all walks of life. There are defects in the house you live in, the bus you take, the car you drive, the laptop you use. You would have seen various manufacturers recalling their products because they want to fix a defect.

Why should one worry about defects? Because defects are usually costly. Each defect has a cost associated with it. The cost of a defect depends upon the phase in which it was introduced and the phase in which it was caught. A defect that was introduced in the design phase and caught in the deployment phase will be the most costly. In the worst case, a defect could be life threatening. For example, a defect brought down the space shuttle Columbia, killing all the astronauts on board.

The big question is why and how defects get introduced.

In any product development, there are various stages. Let us take the example of a software product, as I am more familiar with this. We can divide software product development into various stages like requirement collection, high level design, low level design, user-interface design, implementation, testing, deployment etc. Defects get introduced in almost every stage.

Someone may ask how defects get introduced in the testing phase. Testing is actually supposed to find all defects, but as we know from experience, that is not the case. So this means that there is some defect in our testing because of which we are not able to detect all defects.

When you design your product, it is almost impossible to test the design for all possible inputs. So there will always be some inputs for which you cannot test your design or implementation. Theoretically, there is a possibility that your design may fail for some such input. If I extend the above, we can safely say that, in theory, it is almost impossible to avoid defects. Similarly, a developer without the required level of skill implementing the design will introduce more defects.

In my opinion, as mentioned above, in a completely new piece of software, defects are introduced in all the stages. Once defects are seen, they are fixed. This is when more defects are introduced into the software. So this becomes an unavoidable cycle.

Why are defects introduced when fixing a defect? There could be various reasons: the developer does not know the big picture, or there is not enough documentation about the implementation, or there is no review process, or the developer's skills are not good enough. In my opinion, a review process can help in most of these cases. For example, even if the developer fixing the defect does not know the big picture, the chances of introducing defects can be reduced if an expert reviews the fix. In one of my earlier posts, I talked about the necessity of code review. In that post, I argued in favour of code review, which helps in reducing the defects in the system.

So defects are costly and seem unavoidable, but there are means and processes with which we can reduce their occurrence. I am a big supporter of the review process. In my opinion, the number of defects in a system can be reduced if there are reviews in each stage of development. Also, I think there must be more focus on better reviews in the preliminary stages like design. What do you think about this? It would be great if you can share the particular means or process that has really helped you in reducing defects.

Sunday, August 18, 2013

Multicast in Ethernet LANs

Now that we know why we need multicast and how multicast addressing works, let us look at how Multicast works in Ethernet LANs.

As described in my earlier post on multicast addressing, Applications/Servers use a Multicast Address (Class D address) while sending out the contents. This multicast address is used as the destination address in the IP data packet.

Please take a look at figure 1. Who do you think will receive the multicast traffic sent by the sender?

Did you say 'all of them'? If yes, you got it wrong. To tell you the truth, none of them will receive the multicast traffic as they did not indicate their interest beforehand.

So how do the receivers indicate their interest? They use a simple protocol known as IGMP (Internet Group Management Protocol). I will talk about how IGMP works in detail in a future post. For the time being, just remember that a receiver uses IGMP membership report messages to indicate its interest in receiving multicast traffic for a specific Group Address.
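As a side note, a receiving application does not normally build IGMP packets by hand. On a typical host, the operating system sends the membership report when the application joins a group through the standard sockets API. Here is a minimal sketch; the group address 239.1.1.1 is just an example I picked:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main( void )
{
    int sock = socket( AF_INET, SOCK_DGRAM, 0 );
    if ( sock < 0 ) {
        perror( "socket" );
        return 1;
    }

    /* Ask the kernel to join 239.1.1.1 on the default interface.
     * The kernel sends the IGMP membership report on our behalf. */
    struct ip_mreq mreq;
    memset( &mreq, 0, sizeof( mreq ) );
    mreq.imr_multiaddr.s_addr = inet_addr( "239.1.1.1" );
    mreq.imr_interface.s_addr = htonl( INADDR_ANY );

    if ( setsockopt( sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof( mreq ) ) < 0 ) {
        perror( "setsockopt(IP_ADD_MEMBERSHIP)" );
        close( sock );
        return 1;
    }

    printf( "Joined multicast group 239.1.1.1\n" );
    /* ... bind() to the group's UDP port and recvfrom() to receive the data ... */

    close( sock );
    return 0;
}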

Now look at figure 2 and tell me: who will receive the multicast traffic sent by the sender?


Did you say R1 and R2? If yes, you got it right. Congratulations!

With the figures, can you figure out what difference the IGMP membership report messages made in this network so that R1 and R2 can now receive the multicast traffic? Actually speaking, nothing; IGMP messages do not change anything. So then the question is, how can R1 and R2 now receive multicast traffic just by sending IGMP messages? The answer lies in what hosts do when they send IGMP messages.

I am sure most of you already know how a host accepts traffic destined to itself on a LAN, but let me explain it in my own terms. Hosts use Ethernet NICs to send and receive traffic on LANs. Every NIC is configured with a unique 48-bit MAC address. When one host (or any network element) communicates with another host, it figures out the destination host's MAC address using ARP. It then prepares an Ethernet frame with that MAC address as the destination MAC and sends the frame on the LAN. Every host's NIC receives this frame and compares the destination MAC address with its own MAC address, and only the host whose MAC address matches accepts the frame. Look at figure 3. The destination MAC in the Ethernet frame is R1's, so only R1 accepts the packet.

Now there are two things that come out of this. One is what MAC address the sender should use when it is sending a multicast packet. The other is what the receivers should do to accept the multicast packet.

To keep things simple, an IP multicast address is mapped to an Ethernet MAC address. Here is how the mapping works. Out of 48 bits, the first 25 bits are a constant (the 24 bits 01 00 5E followed by a 0 bit) while the remaining 23 bits are the same as the lower 23 bits of the multicast IP address. For example, 224.1.1.1 will map to 01 00 5E 01 01 01.

Do you see a potential issue with this mapping? If you do, give yourself a pat on the back. With this mapping scheme, two different multicast IP addresses can map to the same MAC address. If you map the IP addresses 224.1.1.1 and 235.1.1.1, both will map to the same MAC address 01 00 5E 01 01 01. This happens because, out of the 28 unique bits of a multicast address, only 23 bits are used in the mapping. Take a look at figure 4 to see how the mapping works.
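If you want to play with this mapping yourself, here is a small 'C' sketch of it. The function names are mine; only the fixed 25-bit prefix and the copied lower 23 bits come from the scheme described above:

#include <stdio.h>
#include <stdint.h>

/* Map an IPv4 multicast address (host byte order) to its Ethernet MAC:
 * the fixed 25-bit prefix 01-00-5E-0 followed by the lower 23 bits of the IP. */
static void multicast_ip_to_mac( uint32_t group, uint8_t mac[6] )
{
    mac[0] = 0x01;
    mac[1] = 0x00;
    mac[2] = 0x5E;
    mac[3] = (group >> 16) & 0x7F;   /* top bit of this octet forced to 0 */
    mac[4] = (group >> 8) & 0xFF;
    mac[5] = group & 0xFF;
}

static void print_mapping( uint8_t a, uint8_t b, uint8_t c, uint8_t d )
{
    uint32_t group = ((uint32_t)a << 24) | ((uint32_t)b << 16) |
                     ((uint32_t)c << 8) | d;
    uint8_t  mac[6];

    multicast_ip_to_mac( group, mac );
    printf( "%u.%u.%u.%u -> %02X-%02X-%02X-%02X-%02X-%02X\n",
            a, b, c, d, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5] );
}

int main( void )
{
    print_mapping( 224, 1, 1, 1 );   /* 01-00-5E-01-01-01 */
    print_mapping( 235, 1, 1, 1 );   /* same MAC: 01-00-5E-01-01-01 */
    return 0;
}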


Your next question might be why we ended up with such a scheme. Why did someone not get it right? Actually, there is a small story behind this. Steve Deering requested that a block of MAC addresses be reserved for IP multicast use, so that he could create a unique MAC address for each multicast address. But he was allocated a smaller block, which had the first 25 bits fixed, and we ended up with this scheme.

Now that we know how a multicast address is mapped to an Ethernet MAC address, let us see how multicast communication works. When a receiver indicates its interest in receiving multicast traffic for a given group, apart from sending an IGMP membership report, it also configures its NIC with the multicast MAC address calculated from the multicast group address using the mapping above. It means that if a receiver is interested in the multicast group address 224.1.1.1, it configures the NIC to receive traffic destined to 01 00 5E 01 01 01. Because of this, any data traffic received with this multicast MAC address is accepted by the receiver. Similarly, a sender populates the destination MAC address by calculating the multicast MAC address from the multicast group address using the same mapping. Please look at figure 5 to understand what I am suggesting.


This is the first post where I have tried to add as many figures as possible. Please do let me know how you like this post in comparison to others where I avoided figures completely. I will leave you with a question now. If multiple multicast group addresses can map to a single MAC address, how does an end host filter out the packets it is not interested in?

Sunday, August 11, 2013

How IETF works?

I illustrated India's contribution to IETF in my earlier 'India in IETF' post and asked you to provide comments and suggestions on how we can improve the quality and quantity of India's contribution to IETF.

In the past, during various informal discussions, people mentioned that they are not aware of how IETF works. I always pointed them to relevant links, but I am sure not many have read through them. So I decided to give a brief rundown of how IETF works in my own words. Please note that this is like a small crash course on the subject; if you want more information, please do go through the links posted at the end of this post.

IETF is a standards body that produces highly relevant technical documents which are used by people to design, develop, use and manage the Internet. IETF has organised its work into different Areas. At present, there are 8 active Areas in IETF. There are several Working Groups in each of these Areas. Each Working Group has a pre-defined charter that identifies the goals and milestones for the Working Group. There are currently more than 100 Working Groups in IETF. I will talk about the process of creating a Working Group in a future post.

Each Area is managed by one or more Area Directors (ADs) with the help of the IESG (Internet Engineering Steering Group). IESG members are Area Directors who are selected by the Nominating Committee. Each Working Group is managed by one or more Working Group Chairs.

Most of the IETF work happens through mailing lists. Each Working Group (WG) has its own mailing list. Anybody can subscribe to any Working Group's mailing list for free and start contributing.

IETF produces RFCs (Request For Comments) which are implemented by various OEM vendors in their products to make the Internet work. Authors write a private draft which describes a specific problem and the proposed solution. The authors then submit their draft to the most relevant Working Group. They can post the draft to other relevant Working Groups as well; for example, a draft can be submitted to both the PIM WG and the L3VPN WG if it is related to both Working Groups. The Working Group reviews all the proposals and accepts those it would like to work on. These proposals are then submitted as IETF Internet Drafts. The drafts then go through extensive reviews in the Working Group. Once the Working Group feels that a specific draft has gone through enough reviews, the Working Group chair submits the draft to the IESG. The draft goes through another review by IESG members and, once it is approved by the IESG, it is passed to the RFC Editor to be converted into an RFC.

The easiest way a person can contribute to a Working Group is by reviewing the various Internet Drafts submitted to that Working Group. Again, as mentioned above, it's free; you just need to subscribe to the mailing list. This helps everybody as it enhances the quality of the drafts and, in turn, the output of IETF.

There are regular face-to-face IETF meetings, organised three times a year. Two of these meetings usually happen in the USA/Canada region while one happens somewhere in the rest of the world. These meetings are as technical as you can imagine. Each Working Group meets to discuss its ongoing work. In these meetings, people discuss their current and future work, find co-authors, get feedback on their work and network with other people.

For more information, please refer to the Tao of the IETF and the page set up for newcomers in IETF.

Monday, August 5, 2013

Multicast addressing : What's the big deal?

In my previous post, I tried to explain why IP multicast is required and how it helps in the network. In this post, I will focus on multicast addressing.

IP multicast uses the Class D address space. The first address is 224.0.0.0 and the last address is 239.255.255.255. This range is also referred to as 224/4.

Similar to the unicast address space, some of the multicast address ranges are special. Addresses in 224.0.0.0/24 are known as local multicast addresses. Packets destined to these addresses are not forwarded by a router and are treated as broadcast packets by switches. Addresses in this range are used by various protocols like RIP, OSPF etc. to communicate with immediate neighbors. The 232/8 range is used only for Source Specific Multicast. I will talk about the different types and how they are used in greater detail in a future post.
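Since all of these ranges are simple prefixes, checking whether an address falls into one of them needs nothing more than a mask and a compare. Here is a small illustrative 'C' sketch; the helper names are mine:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Build a host-order IPv4 address from its four octets. */
static uint32_t ipv4( uint8_t a, uint8_t b, uint8_t c, uint8_t d )
{
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) | ((uint32_t)c << 8) | d;
}

/* 224/4 : any Class D (multicast) address */
static bool is_multicast( uint32_t ip )       { return (ip & 0xF0000000u) == 0xE0000000u; }

/* 224.0.0.0/24 : local multicast, never forwarded by routers */
static bool is_local_multicast( uint32_t ip ) { return (ip & 0xFFFFFF00u) == 0xE0000000u; }

/* 232/8 : reserved for Source Specific Multicast */
static bool is_ssm( uint32_t ip )             { return (ip & 0xFF000000u) == 0xE8000000u; }

int main( void )
{
    uint32_t ospf_all_routers = ipv4( 224, 0, 0, 5 );   /* used by OSPF on a link */
    uint32_t ssm_group        = ipv4( 232, 1, 1, 1 );
    uint32_t unicast          = ipv4( 192, 0, 2, 1 );

    printf( "224.0.0.5 : multicast=%d local=%d ssm=%d\n",
            is_multicast( ospf_all_routers ), is_local_multicast( ospf_all_routers ), is_ssm( ospf_all_routers ) );
    printf( "232.1.1.1 : multicast=%d local=%d ssm=%d\n",
            is_multicast( ssm_group ), is_local_multicast( ssm_group ), is_ssm( ssm_group ) );
    printf( "192.0.2.1 : multicast=%d local=%d ssm=%d\n",
            is_multicast( unicast ), is_local_multicast( unicast ), is_ssm( unicast ) );
    return 0;
}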

IP multicast is built upon the assumption that senders and receivers do not need to know where the other is sitting in the network. They just need to know the multicast address that they need to use. A receiver shows its interest by sending out an IGMP/MLD join request with the multicast address it is interested in, while a sender just sends multicast data using the multicast address.

So how does an application find out which IP multicast address to use? Very simple. Pick any multicast address and start sending data using that address. Am I kidding? No. This is how it is supposed to be. But then how are receivers supposed to know which multicast address the sender is using? Through some out-of-band mechanism, usually manual configuration.

But shouldn't there be some means for the sender to decide on a multicast address so that it does not conflict with others? To start with, IP multicast did not have any allocation mechanism. People came up with various mechanisms to allocate multicast addresses, like MALLOC, but nothing was widely deployed. Later, some mechanisms like GLOP addressing, the SSM address space, the administratively scoped address range etc. were defined to help administrators with a range of multicast group addresses they can use within their network or globally. So usually the sender is configured with the multicast address that it should use.

Nowadays, applications use an address from the administratively scoped address range if the multicast traffic is not supposed to go out of the Autonomous System (AS). An application can use an address from the GLOP or SSM address range if the address needs to be globally unique. Receivers are told about this address through out-of-band mechanisms, like passing it back in an HTTP response.

If you want consolidated information on the multicast address architecture, please read through RFC 6308. A whole list of the special multicast addresses can be found on Wikipedia. IANA maintains the IP multicast address space registry here.

Monday, July 29, 2013

Porting code : Does it help?

You work on a project that manages (creates, enhances and maintains) a large software product. Your manager comes to you and asks you to add a new software feature to this product. He has given you complete freedom in deciding how to proceed. You know that the same software feature is already available in another OS as open source with favourable licensing, or from a company as closed source. You can port it instead of implementing it yourself. So the critical question here is: should you go ahead and port the code or implement it yourself?

In the last few years, I have ported a couple of large features and reviewed a few large features that were ported by others. There were a few very specific reasons why we decided to port a particular feature (I think most of you who decide to port code will have something similar):
  1. Porting the feature will save an ample amount of time and, in turn, money.
  2. We will get well tested code and we will be bringing fewer bugs into our code base (which we know is better code).
  3. The community (or the company, if you have bought the code base) will provide bug fixes and enhancements which can be ported easily, so we as such do not need to worry about them.
Now when I look back at some of these ported features, I feel that it would have been better to implement them ourselves instead of porting them. Here are the reasons:
  1. In at least one of the cases, the code we ported was not modular enough and did not gel well with our code base. We ended up spending a lot of time creating or extending APIs in the ported code.
  2. In at least one of the cases, we needed only one part of the feature, but we had to port the whole code; later we realized that there was no way to compile just that part of the feature. It took us a lot of time to finally get what we wanted.
  3. One of the features had interactions with the operating system and management systems like the CLI. These interactions have been standardized in our product. It took us some effort to make the ported code use these standardized interfaces instead of what it was using before.
  4. In a couple of cases, we decided to own the code after porting, as we had made changes, so we could not rely solely on bug fixes from the original source.
  5. As the code was not written internally, none of us knew the whole code, which brought unknowns into our code base. If any bug was seen, there was always a doubt that it was coming from the ported code. In any case, everybody thinks that whatever they have written is almost bug-free code.
Please note that I am not saying porting is bad. What I am trying to suggest is that you should not decide to port just because the code is available. Look at the following things before you make a decision:
  1. Is the code modular and structured enough that you can integrate it with your software without creating too many new interfaces?
  2. Are you looking to port only part of the code? Is this part big enough to justify porting?
  3. Is it possible to port the code without making too many changes in it? If you are making too many changes, then you may not be able to take bug fixes and enhancements just like a patch. Also, in such a case, you must understand the whole code so that, after porting, you have the same confidence as if you had written it yourself.
  4. Can your management system support be added to the ported code easily?
  5. Is the code you are porting readable and maintainable? Will your team be able to maintain it without outside help?
I am sure a lot of you would have done this in the past and will do it in the future, as everybody today talks about re-usability. But my suggestion is that before you actually take the plunge, do ask yourself: will this porting really help?

Also, some of you would have done some porting in your software engineering life so far; please do share your experiences, views and opinions on this.

Wednesday, July 24, 2013

Why IP Multicast - My take

When I started my career, I worked on an IP multicast initiative, and since then I have been working on this technology. There are various articles and blogs on the net that define IP multicast. I decided to attempt to provide a simple explanation. I hope you will let me know whether I have succeeded or not. So here we go.

When a PC wants to communicate with a specific PC, it uses Unicast. This is similar to you writing a letter and posting it to your father.

When a PC wants to communicate with all the PCs in a network, it uses Broadcast. This is similar to you writing a letter and a copy being sent to everybody.

Multicast is somewhere in the middle. When a PC wants to communicate with a specific set of PCs in a network, it uses Multicast. This set may contain all the PCs in the network. This is similar to you writing a letter and a copy being sent only to the specific set of people who want to receive your letter.

I am sure that after reading this, the next question is: why do we need Multicast? Why not use Broadcast itself? I completely agree. We could use Broadcast instead of Multicast. But isn't there a catch? Yes, there surely is. Otherwise people would not have spent time, energy and lots of Rupees in creating Multicast. Here are the two reasons I could think of:
  1. To receive Multicast traffic, a receiver specifically indicates that it wants to receive the traffic. So the CPU of a receiver will not see Multicast traffic it has not asked for. In the case of Broadcast traffic, the CPU of every receiver has to process the packet, whether it is interested or not.
  2. There are only two broadcast addresses and only one of them can be routed. In contrast, there are 2^28 or 268,435,456 Multicast addresses to choose from, and except for the 256 local addresses (224.0.0.0 to 224.0.0.255), all of them can be routed. So applications can choose from a much wider range of addresses in the case of Multicast.
Now we know why Broadcast should not be used and why we need something special like Multicast. To start with, Multicast was designed to help in one-to-many communication. Later it was extended to support many-to-many communication. In multicast, the sender sends out one single packet and routers forward it towards the receivers. A router replicates a packet if it has receivers on multiple interfaces and sends one copy towards each receiver. I will talk about the protocols used in IP multicast in future posts.

So how does Multicast actually help in a network? Here are the most visible aspects:
  1. A server needs to send out only one copy of each packet if Multicast is used. If unicast were used instead, the server would have to send out as many copies as there are receivers.
  2. Network bandwidth is saved, because the server sends out only one copy instead of one per receiver (see the rough numbers below).
  3. As the overall number of packets in the network is reduced, it also helps in avoiding network congestion.
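To put some hypothetical numbers on point 2: if a server streams a 2 Mbps video to 100 receivers using unicast, it has to push out 100 separate copies, i.e. 200 Mbps, on its uplink. With multicast it sends the single 2 Mbps stream, and the routers replicate it only where the paths to the receivers diverge.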
I did not use any network topology to put across my points. That could be a problem, but I feel it will be more satisfying if you can visualize the whole thing with your strong visualizing capabilities.

I shall be talking more about IP multicast in upcoming posts so stay tuned.

Sunday, July 14, 2013

India in IETF

How has India fared in IETF (Internet Engineering Task Force)?

Some people may not be aware of what IETF is.

IETF is a standards body whose goal is to make the Internet work better.

I have been participating in IETF on and off for a couple of years. I was interested in finding out the contribution trends in IETF, mostly to find out which companies and which countries are contributing to IETF. As part of this, I looked at India's contribution to IETF. When I say India's contribution, I am referring to contributions from people working in India.

In the last couple of years, while I have seen an exponential increase in the contribution from India, I think it is still far from 'our best effort'. Let me share some statistics to prove my point:

  1. Out of around 60 countries that have contributed to IETF, India stands at 13th place.
  2. Out of around 53 countries currently active in IETF, India comes in at 9th place.
  3. Out of around 9000 documents in IETF, only 180 documents (around 1.95%) have come from India.
  4. Out of around 6700 RFCs created till now, only 72 RFCs (around 1.07%) have come from India.
  5. Out of around 15700 authors in IETF, only 67 authors are from India.
  6. Out of around 100 working groups in IETF, authors from India are active in only 29 working groups.
  7. As far as I know, till now, there has never been anybody from India who has been a working group chair or Area Director in IETF.
  8. India has never hosted an IETF meeting.

I agree that these statistics do not paint a rosy picture. To bring out the relative performance of India, I decided to compare this data against some other country and chose China. I chose China for a very specific reason: China and India both started contributing to IETF around the same time. Let us look at similar numbers for China:

  1. Out of around 60 countries, China is in 4th place.
  2. Out of around 53 countries currently active in IETF, China is in 2nd place.
  3. Out of around 9000 documents in IETF, 583 documents (6.30%) have come from China.
  4. Out of around 6700 RFCs created till now, 127 RFCs (1.88%) have come from China.
  5. Out of around 15700 authors in IETF, 337 authors are from China.
  6. Out of around 100 working groups in IETF, authors from China are active in more than 50 working groups.
  7. China has hosted an IETF meeting.

So based on the numbers, it is clear that China's contribution to IETF is growing at a far better rate than ours.

I just wanted to point out our (India's) dismal standalone as well as relative performance in one of the leading standards bodies (IETF).

In a future post, I will discuss what we, as individuals, as companies and as a country, should do to improve the quality and quantity of this participation. I would request people to think about it and post their suggestions on the same.

Credit:
All the IETF data mentioned in this post has been fetched from the IETF statistics web page.

Thursday, July 4, 2013

Code review : A necessary evil


Do we really need code review? Ideally speaking, we don't. But idealism is seldom practical. If people wrote their code properly, without any bugs, there would be no need for code review. But is this humanly possible? Naah!

I have talked to my friends working with different OEM companies about code review in their projects. Most of the responses fall under three categories:
  1. There is no code review, as everybody is pretty good with the code. Also, a lot of regression testing takes care of the bugs.
  2. There is code review, but not many people take it seriously.
  3. There is code review and everybody takes it very seriously. For bigger code changes, multiple reviewers review the code.
I have been part of multiple projects where we maintained large software for various clients, and we have always been in the third category. So I think I am not qualified to outline the advantages of no code review, but let us look at what a strict code review brings to the table:
  1. A reviewer is a different person. He thinks about the code differently, so he brings in a different perspective when looking at someone else's code. He can look at the modularity, scalability and maintainability of the code as a neutral person. Believe me, this has really helped me (and our projects) a lot.
  2. Developers are usually in a hurry. They need to finish the current task at hand and move on to the next task. A code review helps in establishing that a feature has been implemented as stated in the requirements.
  3. Developers are usually bad unit testers. During unit testing, they tend to miss the negative test cases or specific scenario test cases. A reviewer can identify some of these negative and specific scenarios during the code review for the developer to test. This increases the overall reliability of the feature.
  4. If you identify the code reviewer smartly, you can use this exercise to train new people in the code as well as in code review.
All this reasoning looks pretty good, but are there any side-effects? Yes, there are.
  1. I have witnessed first hand that some review comments late in the review cycle can create havoc. The developer tries to carry out the change fast as he needs to commit his code, and usually forgets to handle some special cases which he had taken care of in the first pass.
  2. A similar situation arises when the code review process gets delayed for some reason. In a couple of weeks, the developer may not remember why a piece of code is written the way it is. So when a comment is given on that part of the code, the developer is not sure what to do and might end up breaking his own code/logic, which was most probably correct.
  3. A developer will have to allocate 20% of the total development time for review.
There seems to be both pros and cons of doing code review. So what is my take?

I think I will go with the title, 'Code Review is a necessary evil'.

To learn something more on this topic, I would request people who have followed a 'No code review' policy to please mention the advantages and disadvantages they see with it. Maybe then we can debate what makes more sense.

Monday, July 1, 2013

Reasons to start this blog

I am not sure what to write in the first blog entry when you start a blog. You see, I am doing this for the first time. If I ever create another blog, maybe I will have the required experience to write something different.

I had been thinking of starting a technical blog on the Data Communication domain, focusing on IP routing and switching, IP Multicast and what it takes to develop routers/switches. This has been on my mind for quite some time now (around a couple of years, to give you some clue :( ). I hope I am not too late.

During a discussion last week, one of my colleagues mentioned that only people who take action succeed. So here I am, writing the first entry for my blog.

I was thinking about the reasons why I would like to start a blog when some similar blogs already exist. So here is the list of reasons:
  1. It is being written by me, so there is a difference anyway.
  2. I think that using this blog, I can help myself and others who are trying to figure out things in the Data Communication domain. I will come back and check on this in a year's time.
  3. While I can get opinions from some people on some topics, I thought a blog may be the best way to put across my views on various technology topics. I think this medium will also help me find others who have something to say on the same.
  4. I think I can use this to connect with people of similar backgrounds and experience.
  5. Last but not the least, it will help in improving my English writing and presentation skills as well.
I am sure I will figure out whether I am able to achieve what I set out to in a couple of months' time. Till then, please help me by commenting on, sharing and criticizing my posts as much as you can.

Please say 'Best of luck Bharat'.