Most embarrassing mistakes in my programming career (so far)

As they say, if you aren't ashamed of your old code, you aren't growing as a programmer - and I agree. I started coding for fun over 40 years ago and professionally 30 years ago, so I've made a lot of mistakes. A lot. As a professor of computer science, I teach my students to learn from mistakes - their own, mine, and other people's. I think it's time to talk about my own mistakes, to keep myself humble. I hope they will be useful to someone.

Third place: Microsoft's C compiler

My high school teacher argued that Romeo and Juliet wasn't a tragedy, because the characters had no tragic flaw - they simply acted foolishly, the way teenagers do. I disagreed with him at the time, but now I see a grain of truth in his view, especially as it applies to programming.

By the end of my sophomore year at MIT, I was young and inexperienced, both in life and in programming. That summer I interned at Microsoft on the C compiler team. I started with routine work such as profiling support, and then was entrusted with what I considered the most fun part of the compiler: backend optimization. Specifically, I was to improve the x86 code generated for switch statements.

Determined to generate optimal machine code for every possible case, I dove in headfirst. If the case values were dense, I put them in a jump table. If they shared a common divisor, I used it to make the table denser (but only when the division could be done with a bit shift). When all the values were powers of two, I performed yet another optimization. If a set of values didn't satisfy any of my conditions, I split it into several optimizable subsets and handled each with the already-optimized code.
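
To make the flavor of this concrete, here is a hedged sketch in C++ (the case values are invented): a switch with dense values, which a compiler can lower to a jump table, and one whose values share a power-of-two divisor, which a shift can make dense again.

#include <cstdio>

// Dense case values: a compiler can lower this to a jump table,
// indexing an array of code addresses by (x - 1).
int dense(int x) {
    switch (x) {
        case 1: return 10;
        case 2: return 20;
        case 3: return 30;
        case 4: return 40;
        default: return 0;
    }
}

// Case values sharing the divisor 8 (a power of two): dividing by it
// is a right shift by 3, after which the values are dense again and
// a jump table is once more possible.
int sparse(int x) {
    switch (x) {
        case 8:  return 1;
        case 16: return 2;
        case 24: return 3;
        case 32: return 4;
        default: return 0;
    }
}

int main() {
    std::printf("%d %d\n", dense(3), sparse(24)); // prints: 30 3
}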

It was a nightmare. Years later, I was told that the programmer who inherited my code hated me.


Lesson learned

As David Patterson and John Hennessy write in Computer Organization and Design, one of the central principles of computer architecture and design is to make the common case fast.

Speeding up the common cases improves performance more than optimizing the rare ones. Ironically, the common cases are often simpler than the rare ones. This sensible advice presupposes that you know which cases are common - and that can only be determined through careful testing and measurement.

In my defense, I did try to find out what switch statements looked like in practice (for example, how many branches they had and how the constants were distributed), but in 1988 that information simply wasn't available. Still, I shouldn't have added a special case every time the current compiler failed to generate optimal code for some artificial example of mine.

I should have gone to an experienced developer, worked out together what the common cases were, and handled those well. I would have written less code, but that's a good thing. As Stack Overflow co-founder Jeff Atwood has written, a programmer's worst enemy is the programmer himself:

I know you have the best of intentions; we all do. We're programmers: we love writing code. It's what we do. We think every problem can be solved with duct tape, a jury-rigged crutch, and a pinch of code. As painful as it is for coders to admit, the best code is the code that doesn't exist. Every new line has to be debugged, supported, and understood. When you add new code, you should do it with reluctance and disgust, only because all other options have been exhausted. Many programmers write too much code, and that code becomes our enemy.

If I had written simpler code covering the common cases, it would have been far easier to update later when the need arose. Instead, I left behind a mess that no one wanted to touch.


Runner-up: social media ads

When I was working at Google on social media ads (remember Myspace?), I wrote something like this in C++:

for (int i = 0; i < user->interests->length(); i++) {
  for (int j = 0; j < user->interests(i)->keywords.length(); j++) {
    keywords->add(user->interests(i)->keywords(i));
  }
}

Programmers will probably spot the bug right away: the last argument should be j, not i. Unit testing didn't reveal it, and neither did my reviewer. The launch went ahead, and one night my code rolled out to a data center and crashed every computer in it.
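
For the record, here is the corrected fragment, identical except for the inner index:

// Fixed version: the keywords of interest i are indexed by j,
// the inner loop variable, instead of i.
for (int i = 0; i < user->interests->length(); i++) {
  for (int j = 0; j < user->interests(i)->keywords.length(); j++) {
    keywords->add(user->interests(i)->keywords(j));
  }
}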

Nothing terrible happened. Nothing broke for anyone, because before a global launch the code was tested within a single data center. The SRE engineers just paused their billiards game for a bit and rolled it back. The next morning I received an email with a crash dump, fixed the code, and added the unit tests that would have caught the bug. Since I had followed protocol - otherwise my code would simply never have run - there were no other consequences.


Lesson learned

Many people assume that a mistake this big must cost the culprit their job, but that's not so: first, all programmers make mistakes, and second, they rarely make the same mistake twice.

In fact, I have a friend, a brilliant engineer, who was fired over a single mistake. He was then hired by Google (and soon promoted); he spoke honestly about the mistake in his interview, and it wasn't considered fatal.

Here is a story they tell about Thomas Watson, the legendary head of IBM:

A government contract worth about a million dollars was up for bid. IBM - or rather, Thomas Watson Sr. personally - badly wanted to win it. Unfortunately, the sales representative failed, and IBM lost the bid. The next day the employee came into Mr. Watson's office and placed an envelope on his desk. Mr. Watson didn't even glance at it: he had been expecting the employee and knew it was a letter of resignation.

Watson asked what went wrong.

The sales representative described the course of the bid in detail. He named the mistakes that could have been avoided. Finally he said, "Mr. Watson, thank you for letting me explain. I know how much we needed this contract. I know how important it was," and got ready to leave.

Watson met him at the door, looked him in the eye, and handed back the envelope with the words: "How can I let you go? I just invested a million dollars in your education."

I have a T-shirt that says, "If you really learn from your mistakes, then I'm already a master." In fact, when it comes to mistakes, I have a PhD.

First place: App Inventor API

Truly terrible mistakes affect huge numbers of users, become public knowledge, take a long time to fix, and are made by people who should have known better. My biggest mistake meets all of these criteria.

Worse is better

I read Richard Gabriel's essay on this approach in the nineties, as a graduate student, and I like it so much that I assign it to my students. If you don't remember it well, refresh your memory; it's short. The essay contrasts the desire to "do the right thing" with the "worse is better" approach along several dimensions, including simplicity.

Do the right thing: the design should be simple, both in implementation and in interface. Simplicity of the interface matters more than simplicity of the implementation.

Worse is better: the design should be simple, both in implementation and in interface. Simplicity of the implementation matters more than simplicity of the interface.

Set that aside for a minute. Unfortunately, I set it aside for many years.

App Inventor

While working at Google, I was on the App Inventor team, an online drag-and-drop development environment for novice Android developers. It was 2009, and we were rushing to release an alpha version in time for summer master classes for teachers, who could then use the environment in their classes in the fall. I volunteered to implement sprites, nostalgic for the games I once wrote on the TI-99/4. For those not in the know, a sprite is a 2D graphical object that can move around and interact with other program elements. Spaceships, asteroids, balls, and paddles are examples of sprites.

App Inventor was implemented in Java, so everything in it was an object. Since balls and image sprites behave very similarly, I created an abstract sprite class with the properties (fields) X, Y, Speed, and Heading. Both kinds shared the same methods for collision detection, bouncing off the edge of the screen, and so on.
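
A minimal sketch of that design, written here in C++ for brevity (the real App Inventor code is Java, and the names below only mirror the description):

// Hedged sketch of the hierarchy described above; the actual App
// Inventor implementation is in Java and far more involved.
struct AbstractSprite {
    double x = 0, y = 0;     // position on the canvas (but of *which* point?)
    double speed = 0;        // pixels moved per clock tick
    double heading = 0;      // direction of motion, in degrees

    virtual void draw() = 0; // subclasses differ mainly in what they draw
    virtual ~AbstractSprite() = default;

    // Shared behavior, the same for balls and image sprites.
    bool collidesWith(const AbstractSprite&) const { return false; /* omitted */ }
    void bounceOffEdge() { /* reverse heading at a screen edge; omitted */ }
};

struct ImageSprite : AbstractSprite {
    void draw() override { /* draw a raster image anchored at (x, y) */ }
};

struct Ball : AbstractSprite {
    double radius = 5;
    void draw() override { /* draw a filled circle; anchored at (x, y) how? */ }
};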

The main difference between a ball and an image sprite is what gets drawn: a filled circle or a raster image. Since I implemented image sprites first, it was natural to let the x and y coordinates specify the upper-left corner of the image.

Once the sprites were up and running, I realized I could implement ball objects with very little code. The only problem was that I took the simplest path (from the implementer's point of view), making the x and y coordinates specify the upper-left corner of the ball's bounding box.

What I should have done was make the x and y coordinates specify the center of the circle, the way every math textbook and every other source that deals with circles does it.

Unlike my past mistakes, this one affected not just my colleagues but millions of App Inventor users, many of them children or complete beginners. They had to do extra work in every application that used a ball. While I can laugh about my other mistakes, this one still makes me break out in a sweat.

I finally patched this bug only recently, ten years later. "Patched," not "fixed," because, as Joshua Bloch says, APIs are forever. Unable to make a change that would break existing programs, we added an OriginAtCenter property that defaults to false in old programs and true in all new ones. Users may reasonably ask who would ever think to put the origin anywhere but the center. Who? One programmer who, ten years ago, was too lazy to design a proper API.
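
A hedged sketch of how such a compatibility flag can work; the property name comes from the text above, the rest is illustrative:

#include <cstdio>

// Old projects load with originAtCenter = false, so (x, y) keeps meaning
// the upper-left corner of the bounding box; new projects get true, so
// (x, y) means the center. All geometry goes through one conversion point.
struct Ball {
    double x = 0, y = 0;
    double radius = 10;
    bool originAtCenter = true; // set to false when loading pre-fix projects

    double centerX() const { return originAtCenter ? x : x + radius; }
    double centerY() const { return originAtCenter ? y : y + radius; }
};

int main() {
    Ball oldBall{50, 50, 10, false}; // corner at (50, 50), radius 10
    Ball newBall{60, 60, 10, true};  // center at (60, 60), radius 10
    // Both describe the same circle:
    std::printf("(%g, %g) == (%g, %g)\n",
                oldBall.centerX(), oldBall.centerY(),
                newBall.centerX(), newBall.centerY()); // (60, 60) == (60, 60)
}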

Lessons learned

When working on an API (something almost every programmer does at some point), you should follow the advice in Joshua Bloch's talk "How to Design a Good API and Why it Matters", or at least this short list:

  • An API can do you great good or great harm. A good API wins you loyal users; a bad one becomes your eternal nightmare.
  • Public APIs, like diamonds, are forever. Give it your best: you won't get a second chance to do it right.
  • An API design should be short - one page of class and method signatures with one-line descriptions (an example follows this list). That makes it easy to restructure the API if it doesn't come out right the first time.
  • Write out use cases before implementing the API, or even writing its specification. That way you avoid building and specifying an API that turns out to be unusable.
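
As an illustration of the one-page outline recommended above, here is roughly what a ball component's surface might look like, one line per member (the names are hypothetical, not App Inventor's actual API):

// Hypothetical one-page API outline: signatures plus one-line descriptions.
class Ball {
public:
    double X() const;                 // x coordinate of the center of the ball
    double Y() const;                 // y coordinate of the center of the ball
    double Radius() const;            // radius, in pixels
    double Speed() const;             // distance moved per clock tick, in pixels
    double Heading() const;           // direction of motion, in degrees
    void MoveTo(double x, double y);  // move so that the center is at (x, y)
    void Bounce(int edge);            // bounce off the given screen edge
    bool CollidingWith(const Ball& other) const; // true if the two balls overlap
};

Writing even this much down, next to a toy use case, would have forced the corner-versus-center question into the open before any code existed.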

If I had written even a short spec with a single made-up use case, I would most likely have caught the error and fixed it. If not, one of my colleagues certainly would have. Any decision with far-reaching consequences deserves at least a day's thought (and this applies to more than programming).

The title of Richard Gabriel's essay, "Worse Is Better," refers to the advantage of being first to market, even with an imperfect product, while someone else spends ages chasing the ideal. Looking back at the sprite code, I realize I wouldn't even have needed more code to get it right. Whichever way you look at it, my mistake was a bad one.

Conclusion

Programmers make mistakes every day, whether it's writing buggy code or failing to try things that would improve their skills and productivity. Of course, you can be a programmer without making mistakes as serious as mine. But you cannot become a good programmer without acknowledging your mistakes and learning from them.

I constantly meet students who feel they make too many mistakes and therefore aren't cut out for programming. I know how common impostor syndrome is in IT. I hope you'll take away the lessons I've listed here - but remember the main one: each of us makes mistakes - shameful, funny, terrible. I will be surprised and upset if I don't accumulate enough material for a sequel.

Source: habr.com
