Gun To The Head Interviewing

Consider this… You are a smart, seasoned software engineer with a decade or more experience in the field. You can write code backwards and forwards in your sleep, and often do. You can architect and develop and design and integrate and document and test and debug. You know your shit and you know it well.

But lately, your company has not been showing you the love you want. So it’s time to look for a new job.

You polish the resume, you find some matching places on Wellfound or levels.fyi or, god forbid, Indeed, and make some inquiries. Of course, as a seasoned technical professional, your resume is solid and you get some responses. You go through the first few rounds of interviews: the technical screen with the recruiter person, a more substantial interview with the hiring manager, and then you get to the technical interview where you are meeting another engineer (or several) in the company, someone who would probably be your peer. You talk through some code samples of your stuff, maybe dive into some of your open source work or another past project, and you are absolutely crushing it. You nail the small code points to show you know how to write code, and the big architecture things to show you know how to build systems. And then the interviewer says the following:

We’d like to take fifteen minutes for you to take a look at this sample project we wrote and tell us what it does.

OR

We’d like you to find the problems in it.

OR

We’d like you to finish writing this code and get it working.

And you panic. Your heart starts pumping faster, your breathing accelerates, you start sweating. And you start making mistakes. Which only makes things worse and you start spiraling. But you certainly do not want to show any sign of upset or consternation; you cannot show any weakness, but things are going sideways and they are going there fast. And you’re not at all sure how you got here.

GUN TO THE HEAD INTERVIEWING

This, my friends, is what I like to call “Gun To The Head Interviewing”. I call it this because they have taken a task that is normally something you do in your own way, at your own pace, in a measured and even approach, and jammed it into fifteen minutes, with a bunch of people staring at you and demanding you perform like a trained seal at SeaWorld. It is hyper-aggressive, it is extremely stressful, and it is completely unrelated to how you would actually perform the job.

Given any of the above possible questions asked by the interviewer, to expect a software engineer to read the code, understand it, understand the poorly defined problem space, validate whether or not the code works, and write or discuss a solution, IN FIFTEEN MINUTES, is not at all realistic.

Yet, this keeps coming up. We keep seeing this kind of “pressure test” to verify a software engineer can write code. Except it is not testing writing code, it is testing performance under pressure and that is a very different thing.

Also, without even realizing it, this comes close to being discriminatory. It favors people who do not suffer from anxiety or who are not triggered by pressure situations. That does not mean these disabilities make people worse employees; it just means that you as the interviewer are not opening yourself and your company up to the wealth of diversity that these candidates can offer you. You are doing a disservice to your employer.

Software engineering is a field known for requiring thoughtful, researched, and reasoned answers. To expect someone who works in this field to perform under extreme stress is not testing whether or not they can do the job you need them to do, it’s testing whether or not they can handle your extremely aggressive bullshit. Honestly, it should be a red flag to the candidate, a sign of things to come if they do take the job. They should put that at the top of their list of reasons to not accept.

Interviewing is a process for establishing an initial space of trust. Bi-directional trust between employee and employer. The employer wants to believe that they will get the talent that they pay for and that the employee can do the job needed. The employee wants to believe that the company can deliver on their promises of pay and growth and mission. The interview is the process by which this initial trust is established. At the end of the interview when the offer is on the table, it all comes down to this initial trust.

HOW WE INTERVIEW BETTER

So, of course, this begs the question of how a hiring company should be testing whether or not someone can code during an interview. How does the hiring party truly assess a candidate’s ability to sling code without making it an aggressive pressure situation, and instead make it one in which the candidate is given time to reason and respond in a thoughtful way?

First, I would argue that if you cannot judge an engineer’s technical ability just by having a conversation with that engineer then you have no place being involved in the interviewing process at all. Asking technically deep questions and probing around the answers and into the details can give you a solid idea of a candidate’s technical abilities. It might take a little longer to get into these details, which can run afoul of HR’s desire to have every interview last less than thirty minutes, but it is time well spent.

Second, do not spring the problem on the candidate halfway through the interview. Tell them the problem a few days before the interview so they can spend a little time thinking it through. Share the sample code, if any, well beforehand to give them a chance to read it, to try it out, to understand it. Give them time to get comfortable with what they need to talk about instead of forcing it ad hoc at the last minute.

Now, there are those out there that are worried a potential candidate will just search the internet for the answer and copy and paste that. To this I say, “So what.” We all copy code from the internet, we all search DuckDuckGo for the answers. By obfuscating the question ahead of time, all you have done is tested whether or not we have searched Google for the answers ahead of time and guessed correctly what your toy problem might be. A tiny bit of conversation and inquiry around the answer a candidate is providing, even if they copied it right in front of you, will give you a very strong sense as to how well they understand what they are doing.

Third, time boxing to fifteen minutes or thirty minutes or whatever is entirely unrealistic. We all know that we as software engineers are incredibly bad at estimating and we know that holding us to the estimates of others is an awful idea. So why are we doing this to our candidates? Everyone works in their own way, in their own time frame. I like to write code in a very iterative manner where I keep refining over and over until I have the right solution, but that all goes out the window if you tell me I have fifteen minutes to do it in or you’re going to shoot my brains out.

Finally, do not do this with more than one interviewer. For each interviewer you add to an interview panel you ramp the pressure up by an order of magnitude. And for each interviewer present, the process for the candidate becomes more a performance and less a demonstration of ability. If you want to do these kinds of panel interviews, do it in the earlier phase where the candidate is intentionally being asked to present something to the group, an open source project or a past project on which they have worked. This gives the candidate a chance to prepare for the act of presenting in front of a group and to organize their thoughts into a clear and concise form.

THE PERFECT INTERVIEW PROCESS (IMHO)

All of the above is well and good to discuss, but how does a company put it into practice? How do we vet a candidate and at the same time build good rapport, good trust, with that candidate?

Here is my ideal way to do this. Some may feel this is too much of a commitment, too many people involved, too much time required. From my experience companies generally spend around four to six hours interviewing a candidate, over four rounds, with four to five people. My process below requires six and a half hours, six meetings, and five people. It is a little bit larger than what we are doing today. But I would also argue that what we are doing today is not working and a little bit more may be exactly what we need.

  1. The Recruiter Screen – 30 Minutes – Recruiter talks with candidate and tells them about the company, the role, and the benefits. The candidate is given a chance to give their five minute experience summary about their career. A few simple questions from the interviewer here can also be used to make sure the candidate meets the minimum technical requirements of the job.
  2. The Hiring Manager Screen – 1 Hour (minimum) – The interviewer, usually the manager looking to hire someone for their team, talks more in depth about the role and the work. The candidate does their five minute experience summary again, but this time the interviewer dives deeper into the experience. This is the chance for the manager to ask deeper technical questions and drill down. This is not quite the technical deep dive, but it should help narrow the field more.
  3. The Technical Presentation – 1 Hour – The candidate is asked to present a past project, including showing code samples, and to talk about some of the challenges they faced in delivering their solution. The presentation portion of this can take up to thirty minutes, but the remaining thirty minutes should be used by the interviewing panel to ask follow-up questions and delve far more deeply into the technical answers. It should be noted that the point here is not to pick apart the solution or shit on the candidate’s choices. Instead, focus on understanding why the candidate made the choices they made and whether or not they would make those same choices again. It is not about pointing out errors or other ways to do things, it is about seeing if the candidate can find those things themselves.
  4. Technical Conversation – 1 hour – This is where you have a conversation with the candidate about their technical knowledge. This is not a challenge/response kind of thing, but a chance for you to understand how the candidate thinks about specific technologies while also assessing how much of that technology they know and understand. Your questions will need to reflect the experience the candidate has with the technology, but should never come across as a “prove to me” kind of thing. Again, keep it conversational, but steer it deeper and deeper. There are no wrong answers here, only wrong questions. Ask generalized questions about the technology and then drill down into the details. Why does the candidate like or dislike a specific technology? How would they overcome a dislike? How would they make the technology even better? Get specific. If done correctly this replaces any need for a live coding demonstration entirely.
  5. The Culture Interview – 1 Hour – The interviewer, usually a senior leader in the company, meets with the candidate to determine if there is a culture fit for the candidate. Not a technical discussion, but a discussion about culture and communication and process and flow.
  6. The Reverse Interview – 1 Hour – The candidate interviews the hiring manager, this time to determine if they want to work for this person, team, and company. This is about giving the candidate a chance to see how the company is going to live up to their needs and expectations.
  7. Negotiations – As long as it takes – Always negotiate.

I realize that doing this kind of interviewing is not easy. It’s a significant time commitment, on both sides of the table. But as a good friend recently said (paraphrased)… you are entering into a long-term agreement, for hopefully many years; spending six hours or more on this is not a bad thing. The amount of money involved, and I’m talking beyond just a salary, is enough to warrant taking your time and making sure you are getting these things right.

AND FOR THE CANDIDATES

Of course, all this advice is about how the interviewers should be doing things, but what about when you, as the candidate, get sideswiped by the pressure interview? How do you handle it when John Travolta points a gun at your head and demands you hack or die? (This is a reference to the movie Swordfish, where very early in the movie John Travolta puts a gun to Hugh Jackman’s head and demands he hack the government. It is very unrealistic, but it is also a perfect example of Gun To The Head Interviewing.)

And just a caveat before I dive into this: doing some of what I am about to suggest can be risky. Companies do not like to be told they are doing these things wrong, or discriminating, or are not amazing people. Their process has worked for three other hires and it should work for everyone. Yet, if you do not speak up, if you remain silent and suffer through their very poorly designed process, you are setting yourself up to fail. It is a risk, to be sure, but we are not going to get better as a society at this unless we try to change the shitty ways we do it.

A lot of times these types of interviewers say they just want to “see how you think and reason.” But what this really means is that they want to see if you think and reason exactly like they do, which, I assure you, you do not. This is a huge disservice to both you and the company. You do not think like they do and that is actually a really good thing for everyone involved. Different opinions and different ways to approach a problem are really important for innovation and innovative growth. A bunch of “yes men” is a huge disadvantage in the marketplace and in the growth of your company.

Those caveats aside…

Initially, I would recommend trying to avoid it altogether. Usually, very early in the interviewing process a company will describe the steps they are going to put you through. This is the first chance to speak up. You must champion yourself and say something like: “Unfortunately, I have found that I do not perform really well at pressure coding. I’m wondering if we could try a different approach.” And then outline something that might work better, something like what I propose above. Feel free to reference this article to them and suggest that they do better. At worst you lose the interview entirely, to which I say maybe this was not the company for you in the first place.

Next, challenge the time constraints given to you. In a recent interview of mine, it was suggested I would be able to understand the problem, read their code, validate it, figure out what I would do better, and write that code, all in less than fifteen minutes. When faced with this situation, be frank and up front and say something. Unfortunately, most interviewers are not going to be happy when you push back like this, so I refer you back to point number one above about trying to avoid it altogether.

Finally, when faced with no other option, here is the surefire way to get through this process. Write this down on a post-it note or some such and stick it next to your screen so you can refer back to it without anyone seeing.

  1. Say “I’m going to read the code” and then read the code through, line by line, no matter how long it takes. Do not talk during this part, just read the code.
  2. Pay careful attention to any tests they are running or that can be run to validate any potential changes you are going to make. The tests almost certainly signal the problem areas.
  3. Run the tests/code. Oftentimes the test results will hint at where the problems are. Do this even if they have stubbed in a place for you to write code. Get a sense for things.
  4. Outline where you think the problems are or what you think needs to be done. Do not just go fix them or write code directly. Take your time. Write these out in pseudo-code in a new file and enumerate them. Reorder as you go, moving the more important ones higher.
  5. Write code incrementally and test your solution as you go. Write a couple of lines, run the code. Write a couple more lines, run the code again.
  6. Do not spend any time on boilerplate. Languages like TypeScript are great for defining types and type safety, but do not waste your time on that here. Solve the problem instead and then, if you have time (which you will not), go back and add this later.
  7. When wrapping up, talk about all the things you would have done if you had more time. These can include type safety, validation, error handling, abstractions, pattern implementation, etc.

PUSH BACK FOR ALL OF US

Again, it is a risky move to push back, to challenge someone you are hoping to collect a paycheck from. It is even riskier when you consider how much you need that paycheck, which the media says nearly 60% of people in the USA do on a day-to-day basis.

Yet, we must push back. These practices are not helping anyone. Companies consistently lose out on quality talent while complaining there is no talent out there; candidates get overlooked and sidelined just because they approach a problem differently than the interviewer would; everyone loses out.

Now, I want to be clear here that not all companies do this. Every company you interview with is going to be different. I might argue that that is not actually a good thing, but that is an entirely different blog post for the future. Instead, I encourage companies to reconsider their practices and try to implement a hiring process that is more equitable for everyone, more open to alternative points of view, and far, far less stressful for everyone involved.

The system of pressure interviews for software engineering has to change. We cannot hold a metaphorical gun to a candidate’s head and expect to produce results. It is unrealistic, aggressive, discriminatory, and just poor for business. And the only way we change this is to demand change in the first place. We, as companies, as interviewers, as candidates, need to speak up, point out the fallacies in these practices, and strive to change them. It is only by striving for change that we can hope to move the needle forward and make the process better for all.

So stop accepting this as fact, and start speaking up to get it changed. It’s the only way.

permalink: https://www.arei.net/archives/307
Writing Code for Other People

Writing code is hard. Getting code to work in a repeatable, consistent fashion that addresses all of the possible use cases and edge cases takes deliberate time and fortitude. Yet getting code working is only half the battle of writing good code. See, good code not only solves the problem, it does so in a manner that lets other people use your solution not just to solve a problem, but to understand the problem, to understand the solution, and to understand how to go beyond it. Good code works, but great code teaches.

So, with that in mind, how do we as engineers, write great code? What separates code that merely gets the job done from code that empowers people who read it? How do we transform our code from simply functional to deliberately empowering?

Below I present my ideas on what makes code approachable to others. Following these techniques will make your code more readable by others, more understandable by others, and more reusable by others. To meet these goals requires four specific areas of your attention: readability, context, understandability, and re-usability.

Readability: Write Beautiful Code

The first step in writing great code is to make it readable to others. By the very nature of using a high-level programming language, one would think that readability is solved, but a programming language alone is not enough. A programming language, like any communication system, like any language, is merely a means to express an idea. It is how we use that language and how we structure that usage that makes it beautiful or not, that gives it real understanding.

1. Indentation Conveys Hierarchy

Most modern programming languages do not require indentation. You could simply write all your code like this:

function add(x,y) {
return x + y;
}

and it would compile and execute just fine. But clearly, there is a reason why most modern programming languages do allow for liberal indentation. This code with indentation is infinitely more readable:

function add(x,y) {
    return x + y;
}

We use indentation to indicate hierarchy. This works because of how we visually scan things with our eyes. Western language readers are taught to scan from top to bottom, left to right. Indentation plays into that. As you scan down the code from top to bottom, the left to right indentation allows you to easily ascertain hierarchy within the code.

The return statement in the above code is clearly subordinate to the function definition, merely by the act of indenting it.

In fact, beyond programming languages, most markup and data languages like HTML, XML, CSS, and JSON all support indentation to imply hierarchy. One of the hardest things about inheriting someone else’s code or data is when it is not well formatted. Reformatting tools and format commands mostly work, but not always, and that is where things can rapidly fall apart.

2. Meaningful Whitespace

Indentation is not the only way we convey meaning in our code using whitespace. Consider this code:

const FS = require('fs');
const contents = FS.readFileSync(process.argv[2], 'utf8');
let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));
const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});
Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

Like our indentation example, this is perfectly executable code that will run and do the several “things” it is intended to do. But understanding what those “things” are is not so clear.

Code runs in an organized top to bottom fashion, and we mostly structure that code into logical steps. The code reads a file, the code breaks the file contents up by newlines, etc. These are the steps that your code performs.

If you look at this code you can even begin to see each of these steps. It might take a minute or two, but the steps are there.

Now, look at this code again:

const FS = require('fs');

const contents = FS.readFileSync(process.argv[2], 'utf8');

let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));

const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});

Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

The logical steps of our code are much clearer to see, much cleaner to read. The only change between the two examples is whitespace.

This process of visually splitting the logical steps of code up is called meaningful whitespace. Meaningful whitespace is the art of using well-placed line breaks to chunk code into discrete steps. Just like indentation, we use whitespace, new lines, to organize the logic of what our code does for other people to easily understand. And this is about more than just breaking up large chunks of text. Smaller blocks of code are easier to understand when examined than larger ones. Visually, when reading this code, one can easily see that each of these steps does one coherent thing and that the steps work together to achieve some result.

This is another reason that most programming languages ignore whitespace: because the compiler does not care about it, the whitespace is free to carry meaning for the human reader.

When inheriting someone else’s code it is not uncommon to find a complete lack of meaningful whitespace. The burden this places on the developer to go through the code and chunk it up correctly is significant and slows down their rate of understanding. By grouping your code into chunks that are digestible and understandable to someone else, you reduce their time to understanding.

Context and Comments

Up to now we’ve talked about how to make our code more readable. Indentation and meaningful whitespace provide readers of your code easy visual cues to help them scan and break down your code, and this makes the code more readable. Readability is the first key to great code.

The next step to great code: context. That is, once my code can be easily read, how do I give it meaning so it can be easily understood? The keys to understanding code are understanding the context any section operates within and on, and understanding how the logic of any section flows.

3. Contextual Naming

Fun fact, excluding comments and data, the only things in your code that you get to name are classes and variables and functions. Everything else is a pre-defined keyword or operator of some type. This is why class and variable and function naming is so important, because it is the only part of your actual code where you can provide context to what your code does.

Consider the following code:

function x(y) {
   return y < 1 ? 0
        : y <= 2 ? 1
        : x(y - 1) + x(y - 2);
}

What this code does is not obvious by just reading the code. Providing a contextual name to the function changes everything here:

function fibonacci(y) {
   return y < 1 ? 0
        : y <= 2 ? 1
        : fibonacci(y - 1) + fibonacci(y - 2);
}

Class names, function names, variable names, these are opportunities to describe what they represent (classes), what they do (functions), and what they hold (variables). The name provides context to the role in the system.

Now, there are a few special rules around this, for sure. When iterating in a loop, we use i for the index. This is a well-known convention. But even this convention can stand breaking sometimes. If the context is more important than the convention, always go for more context.
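For instance, in a nested loop, names that carry context can read better than the bare i/j convention. A small sketch to illustrate (grid and paint are hypothetical names, not from the examples above):

// Convention: terse index names
for (let i = 0; i < grid.length; i++) {
    for (let j = 0; j < grid[i].length; j++) {
        paint(grid[i][j]);
    }
}

// Context: the names describe what is being iterated
for (let row = 0; row < grid.length; row++) {
    for (let column = 0; column < grid[row].length; column++) {
        paint(grid[row][column]);
    }
}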

4. Convey Logic through Comments

Class names, function names, and variable names all provide context as to their roles, but program logic is dictated by keywords and in almost all programming languages there is no way to change keywords. So how do we provide context and meaning around program logic?

That is where comments come in, and specifically inline comments within the code. I view this differently than documentation. Documentation answers the bigger questions about the code’s purpose and how people use the code. Comments, instead, help describe the logic and the context around the logic in the code. (We will talk about documentation in greater detail in a little bit.)

Comments within your code are simple, useful hints as to how the logic within the following section of code works. Have you ever read a block of code and then wondered to yourself what is going on? That is a failure of commenting. The initial author failed to recognize that another developer might not understand the code, and that stems from either hubris or laziness, neither of which are particularly good habits to have as a developer.

Now, this is not advocacy for writing a comment before every line of code. I am a firm believer in writing code that is simple to follow and does not require comments to understand, but that is not always possible. And over-commenting creates a whole separate class of problem. Instead, comment where it is needed. When considering any section of code (which you nicely formatted with whitespace), ask yourself “is it readily obvious what it does?” If the answer is even slightly a no, take a second to add a comment or two of context.

This code from an earlier example is a lot more understandable with a handful of comments, and adding these comments as the logic is written costs almost nothing:

// Bring in the standard filesystem library
const FS = require('fs');

// Read the file
const contents = FS.readFileSync(process.argv[2], 'utf8');

// Split our file content into lines and
// convert each line to an array of simple words
let lines = contents.split(/\r\n|\n/);
lines = lines.map(line => {
    line = line.toLowerCase();
    line = line.trim();
    line = line.replace(/[^\sA-Za-z0-9-]/g, '');
    line = line.replace(/\s\s|\t/g, ' ');
    return line;
});
lines = lines.map(line => line.split(/\s/));

// Count the occurrence of each word in all the content.
const words = lines.reduce((words, line) => {
    line.forEach(word => {
        words[word] = words[word] + 1 || 1;
    });
    return words;
},{});

// Output our word cloud and frequency 
Object.keys(words).forEach(word => {
    console.log(word + ' ' + words[word]);
});

There are two different uses of comments. First is what we covered above, to add logic understanding to your code. The second use of comments is to add documentation around how your code is used. This second type is covered in a later section, so please keep reading.

For logic comments, the general rule of thumb is to use // instead of /* */ in your code. /* */ block style comments are really more for providing documentation than doing quick one-off logic comments.

Also, when doing logic comments, do not hide them on the end of a line. Put them on their own line so that they are more visible and easier to find.

superWeirdThing = !superWeirdThing; // DON'T COMMENT LIKE THIS
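The same line reads better with the comment on its own line, above the code it describes, for example:

// Toggle the weird thing before moving on
superWeirdThing = !superWeirdThing;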

Organize Your Code

Understanding of your code comes partially from providing context about what that code does, but equally important to understanding is organizing and structuring your code in a repeatable, understandable, and accepted fashion.

5. Organized from Top to Bottom

The first step in organizing your code is to physically lay the code out in logical sections. Almost all modern programming languages have a recommended approach to organizing your code. Different languages have different approaches to this, and even different approaches within that. JavaScript is one of the more chaotic languages, with little formal agreement on the best way to do this. However, there are still little things you can do without a formal, community-accepted standard. For example, import/require statements are expected to come first in the file.

The important thing here is to keep like with like. Do not put two class definitions separated by some variable assignments. Put the class definitions together. This is an implied structural organization for your code and it applies at the file level, the class level, and the function level.

A common layout pattern for a file in many languages is:

  • Require Libraries
  • Declare Constants
  • Declare Variables
  • Declare Classes
  • Declare Functions

A common layout pattern for a class in many languages is:

  • Declare Static Members
  • Declare Members
  • Declare Static Methods
  • Declare Constructor
  • Declare Methods

There is a common layout pattern for writing functions in some languages, but JavaScript is a little more loose here, so I am not going to offer a formal pattern. Instead, I will tell you how I like to structure things for a function:

  • Arguments and Argument Checking at the top
  • Declare variables as I need them, but nearer to the top if possible.
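As a rough sketch of how these layouts might come together in a small JavaScript file (all of the names here are hypothetical, invented purely for illustration):

// Require libraries
const FS = require('fs');

// Declare constants
const DEFAULT_ENCODING = 'utf8';

// Declare classes: members, then constructor, then methods
class LineReader {
    encoding = DEFAULT_ENCODING;

    constructor(filename) {
        this.filename = filename;
    }

    read() {
        return readLines(this.filename, this.encoding);
    }
}

// Declare functions: arguments and argument checking at the top
function readLines(filename, encoding = DEFAULT_ENCODING) {
    if (typeof filename !== 'string' || !filename) throw new Error('A filename is required.');

    // Declare variables as needed, nearer to the top if possible
    const contents = FS.readFileSync(filename, encoding);
    return contents.split(/\r\n|\n/);
}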

6. Separation of Concerns

In computer science the concept of Separation of Concerns (SOC) allows one to divide a computer program into logical sections based on what each section of that computer program does. In Web Development, HTML, CSS, and JavaScript are an example of Separation of Concerns, where HTML is responsible for the content, CSS is responsible for how the content looks, and JavaScript is responsible for how the content behaves. Model-View-Controller (MVC) is yet another Separation of Concerns system that is often used.

But beyond a fancy CS degree, separation of concerns means creating one thing that does its thing well, without side effects. It means writing many functions that each do something well, instead of one single function that handles all the cases. It also makes more sense, in terms of reusable code, to break things up into smaller discrete units.

It also means letting the various pieces do exactly what they are good at and not trying to force them to do tasks that are better suited elsewhere. It is for this reason that we should not write style information in JavaScript when a perfectly good CSS rule will do. CSS is optimized for dealing with these things and JavaScript is not, so why would you try to outwit it? Of course, there are always times when you have to break these rules, but they are very, very rare.
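As a small, hypothetical sketch of that idea (element and the highlighted class are assumed to exist elsewhere), letting CSS own the presentation might look like this:

// Mixing concerns: presentation details hard-coded in JavaScript
element.style.backgroundColor = 'yellow';
element.style.fontWeight = 'bold';

// Separated concerns: JavaScript changes state, CSS decides how that state looks
element.classList.add('highlighted');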

Related to SOC, in Computer Science there are three other principals to be aware of:

  • Coupling – Coupling is the measurement of how interdependent two modules or sections of code are to one another. There are a lot of types of coupling between two sections of code, with a lot of fancy names and descriptions. For now it is best to just understand it this way: How dependent on A is B and vice-versa? Can A run without B? Can B run without A? Two modules are said to be Tightly Coupled if they are very dependent on each other. Conversely, they are said to be Weakly Coupled if they are only slightly dependent on one another. Coupling in code is necessary because code executes linearly. However, designing classes and functions to be independent provides for better re-use and more understandable code. Consider some code you are looking at and that moment when you say, “Well, this calls function X, now where is function X and what does it do?” This is an example of coupling.
  • Cohesion – Cohesion is somewhat the opposite of coupling, as it measures the degree to which things within a section or module of code belong together. This is usually described as having High Cohesion, meaning things within the module belong together, or Low Cohesion, meaning that they do not. In Object Oriented Programming, high cohesion within any given class is a design goal. The class should do what it sets out to do and only what it sets out to do. If I have a class called Car and inside that class there is a function called juggle(), it probably means my class has poor cohesion (and interestingly, it probably also implies tight coupling with something else). Side effects from running a module or function imply poor cohesion. If function juggle() also fills the car with gasoline, either this is an amazing juggling trick, or there is a side effect in the code.
  • Cyclomatic Complexity – Cyclomatic Complexity is the measure of how complex a section of code is, based on the number of paths through the code. That is to say, the more non-linear outcomes the code generates, the higher its Cyclomatic Complexity score. This is often judged by the number of conditional statements in your code. A function that has no conditionals is said to have a Cyclomatic Complexity of 1. If it had a single conditional with two outcomes (a typical if/else, for example) it would have a Cyclomatic Complexity of 2, and so on, as shown in the sketch after this list. Cyclomatic Complexity is not bad, in and of itself, but the more complex the code, the higher the possibility of failure, and the more testing required. A function should strive for a lower complexity score if possible.
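Here is a rough sketch of how that counting works (both functions are hypothetical, made up for illustration):

// Cyclomatic Complexity of 1: a single path through the function.
function double(n) {
    return n * 2;
}

// Cyclomatic Complexity of 2: the conditional creates two possible paths.
function describeSign(n) {
    if (n < 0) return 'negative';
    return 'non-negative';
}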

So, we talk about these three measures of code in order to reinforce the point of Separation of Concerns. A well-separated module is one that is not tightly coupled to another module, is cohesive in what it does, and keeps complexity low.

When you find yourself in code that is hard to follow, it is almost always because it fails on one of these three factors.

One way to reduce complexity and structure your code better is to identify the cohesive sections of your code and isolate them into Logical Blocks. A Logical Block is a portion of your code that requires multiple lines to complete a single task. As we talked about earlier, breaking your code up with meaningful whitespace makes these logical blocks easier to identify. Adding comments makes them clearer in their purpose.

But a Logical Block also is a good hint as to how to abstract logic into small, more cohesive behaviors. For example, if your logical block needs to get reused, isolate it out into a new function.

Also, Code Folding is worth mentioning here. Code Folding is a feature of most modern IDEs that allows you to hide (or fold) blocks of code. Code Folding in your IDE can be a powerful code reading tool. However, Code Folding requires certain syntax structures to work. By thinking of your code as Logical Blocks you can see where Code Folding opportunities could exist, if the logical block were in an acceptable structure.

Thinking about your code as Logical Blocks allows you to more easily see other ways to do the same thing, ways that may be cleaner and more cohesive. It also illustrates where there may be room for improvement in your code.

For example, this code is a Logical Block.

let sum = 0;
for (const num of numbers) {
    sum += num;
}
const avg = sum / numbers.length

You could change this logical block to be more readable by…

  • Abstracting it out to a function, especially if it is going to be reused.
const avg = calculateAvg(numbers);
  • Wrapping it in an IIFE (Immediately Invoked Function Expression)
const avg = (() => {
    let sum = 0;
    for (const num of numbers) {
        sum += num;
    }
    return sum / numbers.length;
})();
  • Using a built-in method to do it more simply.
const avg = numbers.reduce((sum, num) => sum + num, 0) / numbers.length;
  • Using a third-party library helper function, such as lodash’s _.mean.
const avg = _.mean(numbers);

All of these are better than the original because they create an isolated block that cleanly identifies what it does. With proper whitespace and a comment above the block, this becomes tight, readable, well-reasoned code.

Code Usability

At some level, everything you see, smell, taste, touch, or interact with is a User Interface. An API, for example, is a User Interface between your backend and your frontend. This goes for the code you write as much as anything else. At some point, another person is going to read your code, run it, and even have to make sense of it. So, build your code like you would an API. Take into consideration how another person is going to interact with that code, how they are going to try and reason about it, how they are going to try and work within it.

Build everything with another user’s experience in mind.

7. Defensive Coding

Software is a piece of code that produces some sort of output, usually based on some sort of input. Now, assume someone is going to run your code at some point. Assume that that person is not going to read the documentation. What would happen? Would it work if the first argument is a null? Would it work if the second argument was a negative zero or a NaN? How your code behaves when executed under uncertain conditions is a testament to the quality of engineering employed in building it.

An approach to handling these questions is called Defensive Programming. In defensive programming you assume that users will maliciously try to cause problems with your code and therefore your code must guard against their attack. This means making the assumption that all data is tainted and must be verified to be correct.

Here’s an example:

if (this.onClose) this.onClose();

The assumptions made by this code are not defensive. The snippet assumes that onClose is a function and then executes it as a function. A corrected version of this code might look like this:

if (this.onClose && this.onClose instanceof Function) this.onClose();

But there is another assumption here as well: that executing this.onClose() will work. So even more defensive code might do it thus:

if (this.onClose && this.onClose instanceof Function) {
    try {
        this.onClose();
    }
    catch (ex) {
        // ... do something about the error ...
    }
}

There is even an argument that if this.onClose() is a long-running process, it could cause your thread of execution to grind to a halt, so an even more defensive posture might be thus:

if (this.onClose && this.onClose instanceof Function) {
    setTimeout(()=>{
        try {
            this.onClose();
        }
        catch (ex) {
            // ... do something about the error ...
        }
    },0);
}

That last form might not work for all situations, but it is extremely defensive.

The key to writing code defensively is to check your inputs early in your code. Make sure what you are getting is what you are expecting. If your function only works with even numbered integers, you better make sure it takes only even numbered integers.
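For example, a hypothetical function that only works with even integers might guard itself like this:

// Check the input early; fail fast if it is not an even integer.
function halveEvenNumber(value) {
    if (!Number.isInteger(value)) throw new TypeError('value must be an integer.');
    if (value % 2 !== 0) throw new RangeError('value must be an even integer.');

    return value / 2;
}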

The same is true for classes. Write your classes to assume that users are going to try to harm them and break your code. This means writing defensive methods, but also preventing unwanted or unknown class state mutation. Setters are a great way to protect your class from bad user input and should be used whenever possible instead of exposed public members. Consider this class:

class Car {
    make = 'Chevy';
    model = 'Malibu';
    mpg = 25;
    tankSizeInGallons = 14;

    toString() {
        return `The ${this.make} ${this.model} can go ${this.mpg * this.tankSizeInGallons} miles on a single tank of gas.`;
    }
}

The publicly exposed make, model, mpg, and tankSizeInGallons are all mutable without any thought for defensiveness. A malicious coder could cause quite a problem with these variables. A more defensive style would be to use private variables (or Symbols) to hold the values, and to provide public getter and setter functions to expose them. The setter functions, in particular, can do error checking and ensure types and values match acceptable inputs.
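A minimal sketch of that style, using modern private class fields and a validating setter (the validation rules shown are illustrative assumptions, not requirements):

class Car {
    // Private field holds the actual value
    #mpg = 25;

    get mpg() {
        return this.#mpg;
    }

    set mpg(value) {
        // Validate before accepting the new value
        if (typeof value !== 'number' || !Number.isFinite(value) || value <= 0) {
            throw new TypeError('mpg must be a positive, finite number.');
        }
        this.#mpg = value;
    }
}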

8. Code is a User Interface

Your code, whether a component, a utility, or an API, is going to get used by someone else. This is even more true when one considers how pervasive the sharing of code online has become. And if you work in a team setting at all, this compounds the likelihood someone else will need your code. I do not care how throwaway you believe your code is, assume that someone is going to use it.

To that end, and as stated before, treat your code like a User Interface. This means designing your code to be read is important, but also designing your code to be used by others is equally important. Ask yourself how another user is going to come in contact with your code. If they see the function signature, does it provide enough information as to what it does and how it works or does it require additional information? If they read your code, is it obvious how they would step through it from line to line or does it require more contextual logic?

A lot of what we have already covered in this document describes how to address writing code toward usage by others, but it bears repeating here:

  • Design your code to be read by others.
  • Highlight hierarchy with indentation.
  • Separate discrete sections of behavior with whitespace.
  • Name classes, functions, and variables contextually.
  • Provide comments around programming logic.
  • Clearly separate your application concerns.

Following these steps is the first part of writing code like a user interface.

The second part is to always be asking yourself, how will others use my code? Challenge yourself to always be mindful of this question and your code will be the better for it.

Most professional coding work is a shared environment where everyone is always working on each other’s code; any team experience really is. As such it is critical to always be considerate of how others are going to use your code. Whenever you write a new component you should be consciously aware that someone else may very well need what you are building for their own purposes. Write your component with that reuse in mind. But also, be insatiable in your desire to make your component as accessible to another developer as possible. The more we reuse one another’s components, the better our product becomes.

9. Documentation

Finally, let us talk about documentation. To many software engineers the act of writing documentation is seen as an afterthought, something to be farmed out to some loser English Literature person desperate for a job. But the reality of software engineering is that it is only 50% about writing code. The remaining part of Software Engineering is communicating with other people.

As we have stated above: someone else is going to use your code. It is inevitable. Given that, telling those users how to use your code is critical. If you fail to communicate how your code runs, you might as well not have written the code at all.

We talked about comments in a prior section. Comments are used to describe the logic your code follows. Documentation, on the other hand, describes how a user of your code interacts with the code and gets it to do what is expected.

Unlike inline comments to describe logic blocks, documentation is about describing the interfaces of your code, the places where others will interface with your code. We write documentation to tell others who are using the code what it does and how it is used. This does not mean you document every single function and variable. Instead this means you document every public-facing part of your code, every place others will interact with it.

Fortunately, in this day and age, you do not even have to lift your eyes out of your IDE to write documentation. Any modern language today has an associated documentation language available for it. Java has JavaDoc, TypeScript has TSDoc, JavaScript has JSDoc, etc. These documentation languages assist you in describing your classes, functions, variables, every aspect of your code, as much as you want.

Even better, most modern IDEs have these documentation languages built right into them and have tooling that makes writing that documentation easier.

Here’s an example of using JSDoc:

/**
 * Move the selection to the "first" section.
 * @return {void}
 */
selectFirst() {
    const first = this.sections.find(section => section.startHere);
    this.select(first || this.sections[0] || null);
},

In this example the comment block that precedes the selectFirst() function describes what it is and what it returns.
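For a function that takes arguments, the same idea extends with @param tags. A hypothetical example in the same style (selectByName and section.name are invented for illustration):

/**
 * Select the section with the given name, if one exists.
 * @param {string} name - The name of the section to select.
 * @return {boolean} True if a matching section was found and selected.
 */
selectByName(name) {
    const match = this.sections.find(section => section.name === name);
    if (!match) return false;
    this.select(match);
    return true;
},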

Documentation languages vary by programming language, so please read up on your particular one.

One more note about documentation that needs addressing: your code is not self-documenting. Many, many people have said this over the years and every single one of them has been wrong. What they are really saying is their code is Readable and that is enough. In some cases this is true, but mostly it is not. Please add documentation to your code about what it does and how it is used.

Denouement

So that’s it. Above we outlined nine (9) separate ways to make your code more readable, more descriptive, more defensive, and more user friendly. It’s a lot, to be sure, but each of these approaches is a small step towards the larger goal of writing better code that is more than just runnable.

We recognize that this could be seen as a lot. We all know that software is a demanding job full of demanding bosses and customers who think it is super easy. And these best practices take additional time, they cost cycles, and adopting them can seem overwhelming. However, starting small is the key here. When considered as a whole, these rules can seem daunting. But individually, they are not so rough. Try adopting one or two of these at first, then a few weeks later add another, until they are all second nature. It’s a good bet you are probably already doing one or two of these without even thinking about it. So, add a few more, then a few more, and next thing you know you will be shipping code you are proud to have others read.

permalink: https://www.arei.net/archives/302
THE HUMAN RELATIONSHIP PACKAGE – LEARNING FROM EVENT-STREAM

Authoring and sharing a package is a relatively simple activity these days. Systems like GitHub and npm make it easy for an author to share and publish their work. This is a great thing; it is a wonderful time to be a developer and involved in a developer community.

However, there is a cost to sharing and releasing one’s code that is often unwritten and overlooked. Authors may feel like they are sharing their grand ideas with the world, but they often fail to understand the expectations that go along with publishing a package. Likewise, users think that they can just download a package in an all-you-can-eat buffet of open source software, never realizing that the very success of the package they are downloading requires them to participate in the community of that package. Both sides have responsibilities in the situation and for either side to ignore those responsibilities is a recipe for failure.

Several months ago the JavaScript community dealt with this very problem. A highly respected package author had shared a package (event-stream) he wrote for fun and it got very popular. As is often the case, popularity came with more requests for support and bug fixes. In short order requests became demands, and the author’s time and interest lagged. All the while, very few users were willing to step up and help. So when someone did step up and help, the author accepted and turned over the repository. Shortly thereafter malicious code was introduced into the package and everyone who relied on the package became a potential victim.

In many ways event-stream was a kind of “perfect storm” event that may be unlikely to occur again. Yet, the event-stream saga is a very real example of the cost of publishing code for others to use. Some would blame the author for not providing a better notification that he was handing the package off to someone else. Others would blame the users for not supporting the package either financially or through participation. Still others would point out that this is a professional community and as a professional one is required to know and understand how their software works and the systems it relies upon work. Yet, no matter how we throw the blame around, the reality is everyone paid the price for the failure, and everyone (the author, the users, and the greater community) is culpable.

A dependency, a package, is more than software; it is a relationship. Like all relationships, it comes with a set of expectations between the parties involved: party A in a relationship expects certain things of party B, and in turn party B expects certain things of party A. When either side fails to live up to those expectations that is when the relationship degrades and fails and people get frustrated, hurt, and betrayed.

Consider what one expects when downloading a software package:

  • Function: The user of a package expects that when a package is given X, it will return Y. This is the expectation by the user that a package will work and that it will meet their needs. Occasionally a package fails to work or meet the needs of the user, at which point they uninstall it and the relationship is over.
  • Constraint: The user of a package expects that a package is constrained to doing only what it is intended and described to do and nothing more. That is, the user expects a package to not do other things whether malicious or not.
  • Support: The user of a package expects the author of a package to provide some level of support. This includes well written documentation, responsive bug fixes, avenues of communication, and the addition of new features.
  • Notification: The user of a package expects to be informed when the package changes, how it changes, and why those changes were introduced. Notification allows the author to communicate with the users directly.

But what a lot of people overlook is the other side of the relationship having its own expectations:

  • Feedback: The author of a package expects that the user of a package will report issues with the package. One can never predict all of the edge cases where code may have problems. The author is dependent on the user providing the details about where those edges are and where they fail.
  • Reciprocity: The author of a package expects that the user of a package will help maintain and grow the package. This is a critical part of the open source equation: that everyone shares in the maintenance and growth of the package for the betterment of all. Without this participation most packages languish and die.
  • License: The author of a package expects that the user of the package will use that package in accordance with the licensing terms. The license is an expression of the legal wishes of the author, but the author has little ability to enforce such things. Instead, the author relies upon the relationship with the user to encourage these legal wishes.

When any one of these expectations from either side is broken the relationship is strained. Sometimes that strain can be minor and survivable, but often it can be completely destructive. In the case of the event-stream story from above, the users failed to fulfill the Reciprocity expectation, which in turn caused a snowball effect in which other expectations failed, eventually leading to critical failure of the entire package.

So how does one address these problems? How do the authors and users of a package prevent failures from occurring in the future? How does one ensure that the participants in our communities get what they need to not just survive but thrive as part of the community?

It begins by recognizing that there is still much work to be done on the systems in use and how the community uses those systems. Now, many of the tools at the disposal of package authors are geared toward providing and maintaining some of the expectations outlined above. GitHub, for example, can be said to be helpful in some of these expectations, but lacking (whether intentionally or not) in many of them. Certainly it enables the Support, Feedback, and Reciprocity expectations, but there is still much room for improvement.

While stronger more supportive systems and tools will help address some of the community problems, one must also be willing to examine and address the individual and community aspects as well. In many ways, the current situation in which the community finds itself is a function of getting better tools without considering the social engineering those tools also require. So any solution must also examine the people involved.

The authors need to understand that releasing software is not a “fire and forget” situation and that the users have expectations of the package and its author. There must be a realization that by sharing code with the world, one is creating a community that requires nurturing to grow and thrive. When we publish our code with GitHub it is no accident that you immediately get an Issues tab and a Wiki tab. These are the author’s tools to build a community. And as the author one must be ready to accept that they are the leader of this community. That means meeting the expectations of that community to the best of your ability: maintaining the software, ensuring it is safe to use, supporting it as needed, and updating your users about changes.

However, nobody is suggesting that sharing code automatically dictates a lifetime of servitude. Authors must have the tools and community support necessary to lead within the confines of their time commitments. Part of nurturing and enabling a community is knowing how to ask for and accept help when it gets overwhelming and how to apprise your community about the situation.

Yet the author is not alone in this package relationship: the users bear an even greater obligation to the relationship. Every time a user downloads a new package and does not find some way to participate that user is helping to contribute to that package’s failure. It sounds harsh, but one cannot continue to treat open source software as a free ride and expect to suffer zero consequences.

The manner in which a user contributes back to an open source project comes down to two possible actions: time or money. A user can donate time by helping to improve and grow the project: opening detailed issue reports, submitting pull requests which fix bugs or add features, improving documentation, offering support to newer users, helping to lead the project; all of these activities take time from the user and give it to the author to help the project. Time is the most critical thing an author needs to maintain a project. However, if time cannot be found, or is prohibited or prevented by one’s employer, consider donating money or asking the company to step up and donate money. This can be tricky for some projects as questions about who gets the money, taxes, etc. come into play. However, if there is an avenue to donate, please use it. If not, consider asking the project to add one. Also, for some projects it is possible to buy advanced licenses or support agreements, and this can be considered a method of getting money into the project.

Ultimately a package is only as strong as both the author and its users make it. It is incumbent upon both sides of the author/user relationship to commit to their expectations. Anything less is setting oneself up for failure. The event-stream incident is a lesson from which current and future packages can learn. The users share as much, maybe even more, culpability as the author does. An unwillingness to see that is an indicator of a willingness to fail again. And the big failure of event-stream is not that malicious code got injected, it is that no one will learn from it.

Supporting materials and further reading:

https://medium.com/@cnorthwood/todays-javascript-trash-fire-and-pile-on-f3efcf8ac8c7

https://www.zdnet.com/article/hacker-backdoors-popular-javascript-library-to-steal-bitcoin-funds/

https://gist.github.com/dominictarr/9fd9c1024c94592bc7268d36b8d83b3a

https://changelog.com/podcast/326

permalink: https://www.arei.net/archives/293
My Thoughts on Speaking at JSConf Last Call

Speaking at JSConf Last Call was an amazing experience, and not for the reasons you might think. While the experience is still fresh in my mind I wanted to set down my thoughts and observations. If you find this too long, skip to the end for my “lessons learned.”

In case you were not at JSConf Last Call here are the pertinent details… I, Glen Goodwin, and Todd Gandee gave a talk together entitled “We are Hacks and have been Stealing code for Years.” The talk was about how we all steal code because it is part of the process we use to learn new things and how it is our responsibility to make sure others in our community learn this process. We were very well received, mostly due to Todd and me styling the talk as a quick back and forth dialog, hitting the major points we wanted to make along the way. The audience enjoyed our presentation style, laughing a lot, but listening when we got serious. It was an awesome audience. The video is coming soon. This was my first ever talk at a technical conference and Todd’s second (but largest) technical talk. We are both super grateful to JSConf for giving two unknowns a huge chance on a relatively unknown subject.

So that’s the back story. Now here is where I tell you about everything else. The rest of this article details how we came up with the idea, our process in creating the talk, what it was like to give the talk, and most importantly what we learned from the process.

HOW IT ALL CAME TO BE

So the story of how our talk came to be begins with a Tweet. Todd Gandee, Chris Aquino, Eric Fernberg, and I had all met at JSConf 2013. And every year since then we’ve all tried to make it back to the subsequent JSConf events. Each year when JSConf announcements are made, a tweet goes out among us asking who is going. For JSConf Last Call this was no different and I asked everyone who was planning to go.

Todd replied that he was thinking about putting a talk together.

To this I replied offering to co-speak with him or help out.

It was a pretty big move for me to offer to co-speak. I had always thought about talking at JSConf but never really felt like I had anything worthwhile to say. (See Kevin Old’s talk about Imposter Syndrome.) So my desire to speak was there, but I lacked what I felt was a really big idea. But more than that, the idea of speaking scared me a whole lot; not because I am afraid to speak publicly (I have little fear of public speaking), but more because I was afraid to put myself out there. I remember agonizing over even sending my offer to co-speak or help Todd, but eventually I just decided to risk it.

Todd’s answer came back four hours later and we were on the phone talking within a day. He was in.

The good news was that Todd had a good idea for a topic. The great news was that when he told me what it was, I got excited. I knew I was excited because my brain kept coming up with new ideas, new angles, new thoughts. When I can’t shut my brain up, that’s a really good sign, and the longer my brain keeps spinning on a subject, the better the idea.

So we opened a Google Doc and started spit-balling ideas.

SUBMITTING THE TALK

There were going to be some barriers to working together on this talk; primarily distance. Todd lives in Atlanta and I live in Baltimore, about 700 miles apart. Yet we knew the internet was full of tools for collaborating and we could employ them to our advantage. Initially we started with a Google Doc into which we put our ideas. Then came Google Slides for actually building our slide deck. Eventually we turned to Google Hangouts and ScreenHero for rehearsing together, but that’s getting a bit ahead in the story.

I want to tell you that Todd and I got together every couple of days and constantly refined our idea, worked out the story we wanted to tell, built the slides, etc. I want to tell you that but it would be a complete lie. Life is hard and gets in the way a lot. We are no different, so there was a fair number of stops and starts and really long breaks.

Initially we started by coming up with the JSConf submission form answers we were going to need. After all, there was no real point in working on a talk if we didn’t get accepted. So we crafted a Mission Statement. Well, really more of a presentation abstract, but I really wanted to fit that Jerry Maguire link in there. Then we massaged the abstract into the JSConf submission form.

We had a lot of questions though… Would JSConf allow a pair speaker presentation? Could they even technically support two speakers? Was what we were proposing a good topic? We emailed our questions to various JSConf people we knew from past years. I reached out to Chris Williams, having met him a number of times in mutual local community events. Todd reached out to Derek Lindahl whom he was friendly with from prior JSConf events. We wanted to know if we even had a shot and we wanted to be clear that we were willing to work with JSConf. We didn’t want the fact that we were going to have two speakers put any additional financial burden on JSConf. We were happy to pay our own way, so JSConf didn’t need to comp us extra because of a second person.

Our contact with the JSConf staff met with mixed results. I knew Chris was really busy with real life things and was not surprised when he never got back to me. Todd had a little better success with Derek, but it fundamentally came down to Derek saying, “Just submit it. If it’s good it will get selected.” That quote, from Derek, by the way, is the answer to always tell yourself if you are thinking about submitting. Just stop second guessing yourself and go submit it already.

So, on the last day of the submission deadline, after going back and forth a few times on our answers, we submitted our talk idea.

Here’s our initial submission abstract:

Two intrepid developers, who met at JSConf, examine the relationship of sharing code, community, and developer growth throughout the short history of making programming more art than engineering.

In this talk, Glen Goodwin and Todd Gandee will walk back from the present day to the “ancient” past of Babbage and Lovelace discussing how the act of “creative borrowing” influences learning and understanding for computer programmers; how we learn by observing and deconstructing the work of others to make it our own. This includes an examination of past and current models used for “stealing” the (mostly) freely shared knowledge and past work of others like Github, StackOverflow, View Source, and Byte Magazine. Our talk emphasizes the importance of inclusive conferences like JSConf in the growth of junior and senior software engineers. Programmers’ tools of today illustrate the apprentice/mentor relationship more akin to the arts than engineering.

On October 20, 2015, we were officially notified of our acceptance. It was a glorious moment, getting accepted. Rachel White in her own JSConf Last Call talk said she ran around the building screaming upon being accepted… There may or may not have been some happy dance moves on my part; I admit nothing.

THE FIRST DRAFT

And then it sunk in… Now we have to write the damned thing.

Initially we started just coming up with ideas. We had the basic theme of our talk in the form of our title “We’re Hacks and We’ve been Stealing Code for Years.” Great title, but what did it really mean? What does “Stealing Code for Years” really imply? Also, we knew that this being JSConf’s swan song meant something special to us and that we really wanted to show that. In the shared Google Doc we just started throwing out ideas, snippets really, that we thought might be relevant, possibilities.

And then neither of us touched it for weeks. Like I said, life is hard and things get in the way.

Yet, the thing about an exciting idea for me, is that my brain never really lets it go away. So while neither Todd nor I talked about it or added anything new to the Google Doc, my brain was constantly spinning things around, you know, like in a background thread.

Then one day, while sitting in my favorite lunch spot, having my favorite lunch (beer), I had a moment, a vision, an inspiration. “We should totally open the talk wearing ski masks, like we’re trying to protect our identity.” So I pulled out my iPad and proceeded to write this down. I knew that in order to pull off a ski-mask-based gag, the dialog would need to be very quick, so I decided to approach it like a theatrical scene using a script. And once I started writing it, once I started working in the script format, the words just poured out of me. It helps that I was an English Literature major in college and writing comes very, very easily to me.

So, yes, the first ten minutes of the first draft of the script was written in a bar, on an iPad, while consuming beer.

Explains a lot.

THE SECOND DRAFT

After some initial conversation with Todd about this new script, we again stopped working on the project for a few more weeks. While the first ten minutes of the script was done, the rest wasn’t really coming to me. And what little I did add after the initial burst was a little disjointed. We had all these ideas, but we lacked an organization for the ideas.

And then inspiration struck Todd.

Todd is a very visual thinker where I am a very textual thinker. He thinks by drawing stuff out, where I’m more of a writing stuff down kind of person. They are two different approaches, complementary at times, but incongruent at others. So Todd was having trouble thinking about the talk because we hadn’t organized it visually; I was having trouble thinking about the talk because I couldn’t figure out where to go next. And we were both stuck.

And like I just said above, inspiration struck Todd. He called me up. “I made a Trello board to just write down all the slides we need. I need to organize how this is going to go.” So we fired up Trello and we started to work. We first outlined the major points we wanted to hit, like talking about the history of Code Stealing or the section on Community. Those became our Trello columns (Trello calls them lists). Then in each column, we put the specific points we wanted to make, like talking about NodeSchool or Women Who Code. Then we could move the Trello columns around to come up with the best way to present the story we wanted to tell, the progression from Stealing Code to Community.

I cannot stress enough how much what we did in Trello saved our talk. It was so instrumental to just organizing what we wanted to do. And once we had that, the script I had to write became super easy to do. I literally copied all the column names out of Trello into the script as section headers. Then I copied out the specific points of each section into the script. A little bit of refactoring on the introduction part, and the script pretty much seemed to write itself.

The Script was completed three days after we did our work in Trello. Well, the second draft was completed. The final draft of the script wouldn’t be done until about two days before our talk was to be given. It probably would have been tweaked right up to minutes before the talk, but we had to pull the slides out of Google Slides and into Keynote to protect against conference internet latency. That meant Todd had the latest copy so I was prevented from rewriting anymore.

TODD MAKES SLIDES

The plan was for me to write the script and Todd to work on the slides. But in order to make the slides, Todd needed a sense of where the slides would go, how they would fit into the script. So we decided that as I wrote the script I would put slide changes in as scene notes, like this:

GLEN: That’s my point… We have an entire industry of tinkers, it’s baked into what we do. [SLIDE: Tinkering, or breadboard in state of repair]

Also, since the script had a certain flow I had an idea of what slides I thought we should use. Plus I knew there were just some slides I had to have in the talk because I love the images, like this IT Crowd one… Mandatory for any talk in my opinion.

So once I really got started on the script, Todd got started building the slide deck out. He started collecting images and putting them in order. I am firmly in the camp that you should never make your audience read your slides, so we agreed to minimize that. Use the images to enhance what we are speaking about, so the content of the talk is the focus, not the images. Turns out when you throw in a bunch of humorous slides and make people laugh, that also kind of pulls their attention away from the content, but we’ll talk about that later.

Of course, while all this slide work was being done the script was constantly being tweaked, so the slides were getting tweaked as well. I was making sure to re-read the script 3 or 4 times a day, fixing typos and refining the flow. Todd was continually adding more slides and I was continually offering more suggestions.

The entire process was very iterative. I’m not sure how much Todd started to hate me at this point because I kept changing the ground out from under him. I tried to minimize the impact of my changes, but it happens. There were also a lot of places where I would change his slides and he would change my script. We had to completely throw out the idea of ownership: even though I wrote the script and he was doing the slides, neither of us owned our respective parts anymore; everything was shared.

Meanwhile, we both set out to memorize the script. Big mistake.

REHEARSING OVER THE INTERNET

See, the script I wrote was roughly twenty (20) pages of mostly rapid back and forth dialog. Neither Todd nor I have ever done any acting at all, so we had zero experience memorizing lines. And let me tell you, memorizing lines is incredibly hard. I am a huge live theatre fan, especially Shakespeare, and have always had a lot of respect for actors. In the weeks leading up to the conference my respect doubled, tripled, then doubled again. My hat goes off to the actors that can memorize their roles in iambic pentameter.

So we came to the realization that we were not going to be able to memorize our parts. Instead, we were going to have to read from the script as we walked through the slides. So we had to cut and paste each section of text from the script into the slide. Also, because of the rapid pace of our dialog we would need at least a few lines from the next slide showing on the current slide. This, it turned out, was a fair amount of work; and as we kept refining the slides going further we also had to refine the notes for the slide, and the text from the prior slide, and the text from the next slide. It was a constant battle of keeping everything aligned.

About five days before the conference we had our first “live” reading together using ScreenHero and Google Hangouts. We set some time aside on Tuesday night at 9:30 after our respective wives/partners had gone off to bed. I remember telling my partner Jennifer that I would be up “in about an hour.” I went to bed at 1:30am that night. We managed to read the script completely through exactly twice.

This is where I feel the real work began. We moved some slides around, changed some images, changed a bunch of dialog placement, and all that. Every single slide and line of dialog was tweaked and reviewed. It was constantly a work in progress and as I said above, changing slides around meant a lot of rework to get all the alignments correct. Uggh.

Prior to our first reading, I figured we’d rehearse a couple of times, do the Google Hangout thing, then maybe a few reads the day before our talk. Except it became pretty clear right from the first reading that the timing of our dialog was going to be everything. The opening bit with the masks was especially hard for me to get down because of the constant-interruption nature of those first 20 lines.

After working into the wee hours on Tuesday, we decided we needed to do it again the next night.

Wednesday night at 9:30 I told my partner Jennifer once again, “This should be much shorter, we just need to read it.” I went to bed Wednesday night at 2:30am.

THE DAY BEFORE THE DAY BEFORE

On Thursday I flew down to Jacksonville. It’s a two hour flight from Baltimore, plus a few hours sitting around in the airport. I re-read the slides a dozen times. I had one particularly long speech that I just couldn’t seem to get down, so I kept going over it, over and over again. I’m pretty sure the couple sitting next to me on the plane thought I was some sort of crazed lunatic because I just kept staring at my iPad and mumbling to myself under my breath; and then every time the “Points for Glen” slide would come up I would throw my arms up in the air. I’m pretty sure there wasn’t a sky marshal on the flight or I would have been detained.

The preceding Monday Jan Lehnardt had contacted me about sharing a shuttle van from the airport, since he, I, and Patricia Garcia were all arriving at the airport at the same time. I had already booked a rental car, so I just invited them to join me in the drive. Best thing I ever did. Jan is a seasoned pro at talks and Patricia had just given her first one at JSConf EU. I quizzed them both mercilessly for tips and tricks and perspectives. They were both very open and sharing, despite having been awake far more hours than they should have and it being like 2am in their local time. I appreciate their company and helpfulness.

Additionally, Jan pointed me at a blog post he did on the subject of speaking at conferences which he emailed me later. It’s a great quick read and you should totally read it. It also gave me the idea to write this up as well and share my own experiences. So much thanks to Jan and Patricia.

THE DAY BEFORE

Todd (and another friend of ours Chris) arrived the next morning. Let the dress rehearsal begin.

Remember when I said I had constantly been tinkering… It’s true. The first thing I did before we even rehearsed was drop a slide and 2 lines of dialog. We also realized (remembered actually) how shitty the hotel WiFi is. This is important because we had used Google Slides to write our presentation. The first time we ran through the Google Slides on site we realized waiting for each slide image to load wasn’t going to cut it. We ended up exporting the slides to Keynote. For the most part this was fine as most everything moved over except for one crucial part: We had done some color coding of our speaker notes to indicate text said on a prior slide, or text coming up on the next slide, or even whose line was whose. That didn’t transfer over. Todd was a trooper and reformatted all those things Friday night instead of sleeping.

We rehearsed probably eight times that day. Chris (our other friend) came in and pretended to be our audience despite the fact he had heard our talk five times already at this point. Him being there was so critical because it acclimated me to the idea of an audience. It let me focus on them instead of always looking down at the slide notes.

A large part of our practice on Friday was just getting comfortable with each other and learning the timing and the delivery of each line. We had never given a talk together, so just learning each other’s cues was really important. Creating a rapport between us was really one of the biggest successes we had in our talk and that rapport was entirely due to spending a day practicing. By learning each other’s cues we also learned how to respond to each other conversationally. This turned out to be critical because I don’t think we ever said the same line the same way twice. There was a fair amount of “scripted improv”. That is to say, while we would be saying the line, we each would modify the line or the intonation or the pace when we actually said it. It made for a much more comfortable dialog.

Practice also proved out that our method of reading the slide notes would work amazingly well without the need for total memorization. Yeah, I did end up memorizing a lot of my lines, but what I had a big problem with was knowing when the line was supposed to come. However, because we had a dialog between both of us, whenever Todd was speaking I could glance at the script and read my next line.

Another thing that we changed during practice was who was running the slides. Initially I ran the slides during our remote practices, but it seemed to get in the way when we were rehearsing together. We tried splitting who was in charge of what slide, but it didn’t work. So Todd just took it over and did a far better job than I had. I think he’s more used to using the clicker and was less worried about timing the slides with the dialog. I also think he really wanted to take over running from the very beginning, but I was unable to hear his desire. Sorry Todd. You did an awesome job running the slides.

GIVING THE SPEECH

Honestly, I have very little recollection of what happened during the speech.

I couldn’t tell you how Tracy introduced us, but I’m sure she did a great job. Everyone knows she’s awesome.

I know a lot of people laughed, which was so amazing. I knew we had a few jokes in there, but never expected our audience to laugh as much as they did. And the laughter started the minute we stepped on stage. I’m told we looked utterly ridiculous. I remember telling myself prior to the talk that “if we got some laughs, just wait a second or two for them to die down before continuing.” Problem was, in the intro, the laughter didn’t stop. People genuinely thought what we were doing was funny. That made the speech for me right there.

After that point, everything was coasting.

I know I made two mistakes, but they weren’t huge and I was okay with that. I remember toward the end of the talk, Todd read one of the lines I was supposed to read. So I read his. And we kept going reading the other person’s line. Nobody seemed to notice, so we went with it. I think the practice really allowed us to do that. Well, that and it was pretty close to the end.

I do remember Zahra‘s game of twenty questions, but having missed all the prior talks due to rehearsals, I had no idea why she was asking us these questions. It seemed odd, but I rolled with it. Turned out to be a lot of fun… I improv fairly well. I think Todd doesn’t do as well and was less comfortable.

AFTERSHOCKS

Let me tell you what it’s like after giving a speech…

First, everything about your talk, all that memorization that you did? all of it gone. My brain literally emptied of everything regarding the talk within a second of exiting the stage. Well, that’s not true… I know all the themes and points we hit during the talk, but the lines we memorized? Can’t think of a single one. A few have surfaced back to memory over the days since, but by and large all the memorized text is gone.

Second, I was full of energy. My body was positively charged and I just wanted to bounce around. We had given the talk, everyone laughed, it was great. I was humming.

People came up to us and said good things. Chris Williams came over and complimented the hell out of the talk. That meant so much to us that he enjoyed it. Todd and I had said that if just one person came up to us after and complimented the talk, then it was a success. That the first person to do so was Chris made it all that much better.

We shook hands and said “Thank You” a lot. We met new people, we grew our community, it was amazing.

And then the crash… I think your body is running so high building up to your talk, that when it’s all over the adrenaline goes away and all you want to do is sleep. I almost fell asleep in one of the talks after ours. Todd was in the exact same state. Power nap time. It’s amazing how much your body can recover in a 30 minute nap. Try it some time. 30 minutes is the perfect nap length.

WHAT DID I LEARN?

Don’t agonize over submitting your talk, just do it. You are completely capable of giving a talk and people genuinely want to hear what you have to say. Stop thinking about it and go submit a talk. Do it now, I’ll wait.

Don’t give a talk with another person unless it’s absolutely critical to do so. It’s really, really hard to do. We were very lucky in that Todd and I work amazingly well together. YMMV.

Outline before you write, or make slides, or whatever. The outline is the key to organizing your thoughts.

You don’t need a script like we had, but have solid notes that are easy to refer back to.

Don’t make your audience read your slides.  Use images to enhance what you are saying.  But be careful with funny images, as they can also pull people’s attention away from what you are saying.

If your slides depend on timing, make sure your notes have the text that is immediately following your slide change so you don’t lose your place.

You can never practice too much. I think overall, Todd and I rehearsed together somewhere between fifteen and twenty times. When we were not rehearsing I was reading the slides over and over again.

Delivery is everything. The timing, the intonation, the mannerisms, they all play into the performance.

It’s a performance. I learned this from Jan’s blog post (see above) and it’s absolutely true. Even more so for us where we actually had a script.

If you do slides in Google Slides, export them to PowerPoint or Keynote for the actual presentation. Never ever rely on conference WiFi.

You are absolutely going to make mistakes during the talk; just roll with it. Take a second, move on. Repeat your line if you have to. Like I mentioned above, Todd said one of my lines near the end of our talk, so I said his next line, and for the last 10 or so lines of the talk, we were inverted. We never rehearsed that, so we were completely unprepared for it, but we were pros at reading the script by then, so nobody noticed.

As Chris Williams told all the speakers in a conference call prior to the conference, everyone in the audience wants you to succeed. They’re your peers, co-workers, friends, and family.

Thanks.

permalink: https://www.arei.net/archives/289
Announcing npmbox

So, I have a problem with npm in my current work. Basically it’s this: the systems I work on are disconnected from the internet but I still need certain node modules. So in order to get a module over to these systems I have to install it locally, zip it up and move that file. But technically, that file is an INSTALLED version, not the raw version of the install. So what I need is the raw files of the npm install, and then I need a way to install from those files.

I recently filed a feature request for npm describing this very need. Yet, I don’t think everyone understood the request, plus I thought it wouldn’t be that hard to actually build it. So I did.

Announcing npmbox, a command line utility add-on for npm that, when given an npm package

npmbox <package>

downloads the package and its dependencies and creates a .npmbox file, which is a .tar.gz file of all the raw files necessary to install the package, including the package itself, its dependencies, and its optional dependencies.

Taking this .npmbox file, you can then move it to your offline system and issue an npmunbox command against it.

npmunbox <npmbox-file>

This will take the box file, unzip it, and install its contents into the current folder.

This achieves exactly the process I was looking for: bundling a package for offline use and then installing from that bundled package.
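
For example, the whole round trip might look something like this (the .npmbox filename here is illustrative; check what npmbox actually writes out):

npmbox express

Then move the resulting express.npmbox file over to the offline machine by whatever means you have, and run:

npmunbox express.npmbox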

My sincere hope is that the npm people take a look at the functionality and actually add it directly to npm. I’m not expecting them to use the code, just the basic idea. I’m sure there are far better ways to achieve this from inside npm than what I am doing. So have at it.

permalink: https://www.arei.net/archives/274
Hiring Cleared JavaScript Developer, Ellicott City, MD

Generally I avoid recruiting people on behalf of my company, but they have recently asked me to hire a person to work directly for me and to hopefully even replace me in the future.  And, quite frankly, I’m tired of interviewing people brought to me by the company recruiter who have no understanding of what is really involved in writing code.  So, I wrote this way more interesting sounding job description and now I’m looking to find someone to fill it.  Are you that person? Know someone who is that person? Let me know with a quick email sent to vstijob/at/arei/dot/net.

Are you an up and coming hot shot Web Developer who somehow ended up working on government contracts?  Are you horrified by the state of technology that the government contract you are working on has saddled you with using? Do you wish that your project would stop supporting browsers that are 11 years old and move into the future? Do you want to move beyond writing JSPs, Web Services, and XML? Are you ready to give into the dark gibbering madness that comes with embracing technologies like JavaScript, CSS3, JSON, and NodeJS? Are you ready to awesome-ify your career?

VSTI, A SAS Company, is looking to hire a Junior to Mid-Level Web Developer/Engineer and we want it to be you. Well, we want it to be you if…

  • You have a good basic understanding of JavaScript and are ready to learn a lot more.  And by ‘basic understanding’ we mean that you know what a closure is and how to write one in JavaScript, but you really want to understand why this makes JavaScript so powerful and all the really cool things you can do now that you know what a closure is.  We’re not looking for a JavaScript expert, but someone who wants to become a JavaScript expert.
  • You know more about HTML/CSS than just how to write tags and believe that 90% of HTML should be written using only DIV tags and some amazing CSS. In fact, you should be able to tell the difference between block and inline elements at a glance.
  • You find the possibility to use cutting edge technologies like NodeJS and ElasticSearch to be darkly enticing. In fact, just the act of us mentioning that you could work with those technologies makes you giggle like a mad scientist.  After the giggling has died down you decide to go read the documentation just for fun.
  • You have a serious passion for technology and want to learn more, a lot more.  Ideally, you wish you could learn everything through some sort of cybernetic implant process, but you realize that you still haven’t invented that yet and the only way to learn is to read and to get first-hand experience.

Here’s the obligatory job posting details…

  • You must be willing to work in Ellicott City, MD on a daily basis.  No telework is possible.
  • You must have an up to date TS/SCI Full Scope Polygraph clearance.
  • You must have a solid understanding of JavaScript, CSS, and HTML.
  • You must be willing to be figuratively thrown into the fire.
  • You must be passionate about learning new stuff.
  • You must desire to become an expert in Web Technologies.
  • You might also have an understanding of any of the following… Java, Groovy, Ant, Grails, more than one JavaScript framework (like jQuery or Prototype), Agile/Scrum/Kanban, CSS Resets, SVN, Hudson, CSS3, Rally, Apache Tomcat, or Git.
  • If you have a Github account, a blog, or an active twitter account that will help your cause greatly.
  • You might also be competent with some art software like Adobe Fireworks or GIMP, but it’s not a requirement.

In return for all of this VSTI, A SAS Company, will provide you with the following…

  • A very competitive salary.
  • Amazing benefits that you are not likely to get elsewhere.
  • Small company feel, Large company resources.
  • An amazing chance to learn and grow your skills.
  • Patriotic pride in what you do (since this is a government contract after all).
  • The opportunity to be part of a team where your opinion is valued and taken into consideration.
  • The possibility of free beer if you like that sort of thing.

So, if any of this sounds cool to you then you should apply for a job.  We’re interviewing now and we’re looking to hire very quickly.

permalink: https://www.arei.net/archives/260
node-untappd v0.2.0

Just a quick note to say that last night I released to Github and NPM v0.2.0 of node-untappd, the Node.js connector to the Untappd API.  This new version supports the latest v4 of the Untappd API including OAuth authentication.

Lots more details at Github: https://github.com/arei/node-untappd

permalink: https://www.arei.net/archives/258
Introducing functionary.js

I am pleased to release another open source project, functionary.js.

functionary.js is a simple JS polyfill and enhancement for adding additional behavior to functions.  Fundamentally, it allows developers to modify and combine functions into new functions to develop novel patterns.  It is very similar to sugar.js, prototype.js, and the jQuery Deferred object and behavior.  It is sort of like Promise/A but not really like it at all.

Functionary is a modification to the Function prototype.  This may cause some users considerable consternation as there is some belief that JavaScript should not be used in such a manner.  I, however, believe that JavaScript was actually designed to do this very thing and should do it, albeit with great care.  That said, functionary.js will never overwrite a function of the same name on the Function prototype.  So if Function.prototype.defer is already created by another library (like Prototype JS) functionary.js will not overwrite it, but use it instead.

functionary.js works in both browsers and node.js implementations.  However, functionary.js does modify the global Function prototype and should be used with care and full understanding.
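
To make the “never overwrite” rule above concrete, here is a minimal sketch of the kind of guarded prototype extension being described.  This is not functionary.js’s actual source, just an illustration of the pattern, using a setTimeout-based defer as the example:

// Only add defer if no other library has already defined it.
if (!Function.prototype.defer) {
  Function.prototype.defer = function() {
    var fn = this;
    var args = Array.prototype.slice.call(arguments);
    setTimeout(function() {
      fn.apply(null, args);   // run the bound function on a later tick
    }, 0);
  };
}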

It breaks down into two distinct classes: Threading enhancements and combination enhancements.

For purposes of the following discussion, the following things are true:

var f = function() {};
var g = function() {};
var h = f.something(g);

In the above example, f is the bound function of something and g is the passed in function.  Understanding this will help the discussion below be more clear.

Threading Enhancements

While JavaScript is inherently single-threaded, it is possible to create thread-like behavior using the single thread model and eventing.  This is not a replacement for true multi-threading, but an okay approximation and the fundamental underpinning of JavaScript and node.js.  functionary.js provides a set of function modifiers for working within the confines of this “threading” environment to produce cleaner and expanded function behaviors with regards to threading and event behaviors.

The following functions fall into this group: defer, delay, collapse, and wait. defer and delay will be familiar to many JS developers, but collapse and wait are the really interesting additions.

collapse basically will collapse multiple executions of a function (call it f) into a single execution. So regardless of the number of times you call f() the actual execution will only happen once… the executions are collapsed into a single execution.  collapse can be used in one of two forms: Managed or Timed.  In Managed mode, the execution of a function is not actually performed until f.now() is manually called.  Thus repeated calls to f() are short circuited until f.now() is called.  In Timed mode, the execution of a function is automatically performed x milliseconds after the first call to f() and then everything is reset.  This ensures that execution happens based on some period of time.
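
Conceptually, Timed mode works a lot like a debounce.  Here is a rough stand-alone sketch of the idea; this is not functionary.js’s actual implementation (the real thing hangs off the Function prototype, so see the docs), just the concept:

// Collapse any calls made within `ms` milliseconds into one execution.
var collapseTimed = function(fn, ms) {
  var timer = null;
  return function() {
    if (timer) return;              // a run is already scheduled; collapse this call
    timer = setTimeout(function() {
      timer = null;                 // reset so the next burst of calls schedules again
      fn();
    }, ms);
  };
};

var save = collapseTimed(function() { console.log('saved'); }, 100);
save(); save(); save();             // logs 'saved' exactly once, about 100ms after the first call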

wait is used to defer execution of one or more functions which are passed as parameters to wait.  Once all those functions have been executed, regardless of results, the bound function of wait is executed.  For example, f.wait(g,h,i) allows a developer to defer execution of function f() until g, h, and i have all executed.
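
Again, this is not functionary.js’s implementation, but the underlying idea is a simple barrier: wrap each watched function and count down until they have all run, as in this sketch:

// Run `fn` once every function in `fns` has executed at least once.
var waitFor = function(fn, fns) {
  var remaining = fns.length;
  return fns.map(function(inner) {
    var ran = false;
    return function() {
      var result = inner.apply(this, arguments);
      if (!ran) {
        ran = true;
        if (--remaining === 0) fn();   // the last watched function triggers fn
      }
      return result;
    };
  });
};

var ready = function() { console.log('g, h, and i have all run'); };
var wrapped = waitFor(ready, [function() {}, function() {}, function() {}]);
wrapped.forEach(function(w) { w(); });   // 'ready' fires right after the third call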

Combination Enhancements

The other aspect of functionary.js is to provide some simple ways to combine functions with other functions in a clean and concise manner.  Sure, it is possible in unmodified JavaScript to combine functions, but functionary gives you a clearer way to do this.

At the core of this group of functions is bundle which takes the bound function and combines it with the passed in function(s) to produce one single function.  Think of it as taking function f and function g and bundling them together into function h.  Calls to function h then execute f and g in that order.  While in some cases it’s easy enough to just call f and g directly, in others being able to pass a single function like h can be incredibly useful.  Consider this:

var f = function() {};
var g = function() {};
var h = f.bundle(g);
h.defer();

The last line of this code ensures that f and g run together, but after being deferred.

Other functions include chain, before, and after.  These functions are all, in essence, some form of bundle above, although with different orders of execution.

functionary.js provides two new functional wrappers, completed and failed.  completed ensures that the passed in function will only execute if the bound function completed without throwing an exception.  failed is the converse and will ensure that the passed in function will only execute if the bound function DID throw an exception.  These two functions offer a unique approach to try/catch paradigms.  Coupled with bundle, a try/catch/finally model is easily possible as shown:

var compl = function() {};
var fail = function() {};
var fin = function() {};
var f = function() {};
var h = f.completed(compl).failed(fail).bundle(fin);

Finally, functionary.js offers a wrap function akin to sugar.js and prototype.js where one can wrap the passed function around the bound function.
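
As a rough illustration of the idea, here is a hypothetical usage following prototype.js-style wrap semantics, where the wrapper receives the original (bound) function as its first argument; the exact functionary.js signature may differ, so check the docs:

var hello = function(name) { return 'Hello ' + name; };

var loudHello = hello.wrap(function(original, name) {
  return original(name).toUpperCase();   // call the bound function, then modify its result
});

// loudHello('world') would return 'HELLO WORLD'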

Documentation

Please see the documentation on github for all the details of using these functions.  Also, play with the code, it probably won’t bite.

Installation and Details

You can get functionary from github.  I encourage you to check it out, fork it, and submit changes.  The ultimate goal is to make functionary the go to polyfill/add on to JS for functions.

 Thoughts, Comments, Suggestions?

If you have thoughts, comments or suggestions, please put them on github or contact me on twitter at @areinet

permalink: https://www.arei.net/archives/254
Adding Last Beer and Last Tweet to a Website

So, I updated the site today adding two new features that I have wanted to add for a long time: Last Beer and Last Tweet.

Last Beer is a peek into the last beer that I consumed and checked into on Untappd.  Don’t know about Untappd? It’s basically foursquare for beer and if you aren’t using it, you are really missing out.  I’m a big fan and regularly use it to track my beer consumption and history.

Last Tweet is, as imagined, the last thing I tweeted, retweeted or replied on twitter about.  It’s about as basic as it comes.  I am a huge twitter fan/user and if you are not, you should be.  Twitter is where it’s at and Facebook is incredibly lame.

The really cool thing about these two features is that they are both built and running on node.js.  What’s more, Last Beer uses the Node module node-untappd which I wrote back in March as my very first node module.

I will say that getting node.js up and running in my hosting environment was a big PITA, but only because my hosting environment is shared hosting and running server processes is a big no-no there.  I ended up having to buy a VM and run it there.  That meant a fair bit of research and what not on which service was better and cheaper and all that, but I finally settled on one in April.  Once the server was provisioned getting node.js running was trivial.  It’s really been a pleasure to work with once you figure out what you want.  Perhaps I will blog about the experience in more detail at a later time.

http://untappd.com/user/arei

permalink: https://www.arei.net/archives/247
5 Interview Questions for finding Software Gods…

So, I have, from time to time, been asked to interview various candidates, despite not having much say in the hiring process in general.   Mostly, I’m asked because my employers want me to tell them if the candidate is technically competent or not.  But for me, technical competency is only half of the picture.  I want competent, but I also want passionate, and that is by far the harder trait to find. Passion is the difference between merely writing code and breathing code, the difference between a cog and a creator, the difference between collecting a paycheck and being committed to the future.

To that end I have developed a series of questions that helps me narrow the pool some.  These are questions aimed at gauging not just competency, but the passion one brings to the team.  Without passion, you are just a robot and robots are a dime a dozen… (especially in government contracting).

So, here are my top five interview questions for finding the kind of people with which I want to work.  These are all related to software development, but I’m sure you can adapt these questions to whatever industry you work in. I don’t really have an opinion on your industry, so you are on your own.

Question 1). Describe the coding you do outside of work.

The answer a candidate has for this isn’t nearly as relevant as the fact that they have an answer for this.  A candidate who is not programming outside of their work structure is, generally speaking, not in love with what they do. If you don’t love what you do you might as well not be doing it; you are certainly not worth my time investing in you.  I will give some credit to someone who describes what they would like to be doing outside of work, but they cannot find the time because of family.  I know families take up a butt load of time, so I respect that.  Oftentimes when I am interviewing this question is asked obliquely in the form of “What’s your Github account?”  It’s the same thing really, although this will also let prospective employers view your work directly as well.

Question 2). Tell me what websites/sources you regularly read in order to stay current in your field?

This is a great question to separate the amateurs from the experts.  If you are not reading multiple regular sources to stay relevant in your field you might as well retire. Every field and industry has sources devoted to keeping you up to date in your field, follow at least three and stay on top of it.  When a candidate fails to answer this question I pretty much stop right there.  This is even more true for technology people than others, but can be applicable in other fields.  For example, my fiancee Jennifer is in Acoustics and she regularly reads science journals to stay on top of her field.  Technology just makes it easier as there are thousands of sources.

Question 3). “I see that you list Library X on your resume.  How would you change X and make it better?”

This is one of those dual purpose questions… You can answer this technically, but any answer you give will also show whether or not you have passion to make things better.  I can judge both by your answer here.  The worst possible answer though is “I really love the way X does this and wouldn’t change it at all.”  If you don’t want to change X you probably aren’t using X very much.  Actually, there is a worse answer than that and that is the answer where you fumble around trying to find an answer which tells me you lied on your resume.  Fail.

Question 4). “Who are some of the movers and shakers in your field that you like and why do you disagree with them?”

Again, this question goes to whether or not the candidate is staying current.  But more importantly, it delves into the candidate’s opinions, and ultimately, passionate people have strong opinions.  You might not agree with them, but you cannot fault their passion and commitment to a point of view.  The follow on questions from this one can really be intense as well and give an interviewer a great chance at delving into the prospective employee’s views.  Don’t miss out on the chances this question can open up.

Question 5). “What’s the worst thing you’ve ever written and why?”

A great question that challenges the interviewee to examine their own work and then defend it.  This can be followed up with all kinds of interesting questions to get at problems or desires, successes and failures.  Passionate people not only recognize their failures, they have loads to say about how they would change things.  A person who doesn’t think they have improvements to make on their own work is one I don’t want to work with.  Continually striving to better oneself is a must.

So there are my five questions.  As with all interview questions, these are incredibly leading questions which allow for a great deal of follow on and discussion.  And they are meant to be that way.  A single interview question isn’t an end, but a beginning to a discussion.  Ultimately, these questions do not guarantee that you are going to find the perfect person.  Rather, they serve as a guide to allow you to delve into the depths of what really excites a person about the work that they do and the work you envision for them.

And there isn’t a score here for questions they got right.  All of these questions lead to discussions that will reveal the characteristics that I am trying to get at, namely whether or not the interviewee is passionate about their work.  Each interview is obviously different and has to be judged differently.  In the end it really comes down to a gut decision.  The hope here, however, is that these questions can lead you to better gut decisions.

The goal here is to determine a person’s passion, their excitement for their job, because a person who lacks that kind of edge is really not worth the time.

permalink: https://www.arei.net/archives/248
Why Dependency Management Pisses Me Off

Yes, it’s true. Dependency Management Pisses Me Off. Jason van Zyl over at Sonatype needs to be kicked in the groin… repeatedly. (Sorry, I don’t really know Jason and it’s not nice to say such things, but I wanted to really hammer home the point. Jason, I apologize to your testicles.) Seriously, when did we become so amazingly lazy that saving a JAR file into our SVN repositories became a big deal?

Now, I don’t want you to just think I’m some raving lunatic out there on his soap box shouting into the wind despite the accuracy of this picture; I want to at least pretend that you are not going to scream “TL;DR” and actually read the damn posting, so here is why Dependency Management pisses me off. Feel free to reply back to me on Twitter (@areinet) and I will engage you in some spirited debate. And I promise I won’t hurt your testicles in the process.

1). Dependency Management adds unnecessary complexity. Are we not just talking about saving some files into our SVN repository after all? Why is that so hard? And who on earth thinks that writing and changing a pom.xml file is actually easier than this? Also, there are people who would say that you shouldn’t be committing built objects into SVN (or whatever) and that we shouldn’t waste disk space. To these people I say this: Disk space is cheap. Seriously, the going rate for a HD is around 9gb per $1 USD. I assure you, no matter how many JAR files your project needs this is incredibly cheap.

2). Dependency Management puts someone else in your build loop. When I build a project, I want to rely on as few people as possible to fuck things up. Yet Dependency Management injects a completely incalculable third party into your system, and that’s just for one dependency. Sure, that dependency is always there, but with external dependencies, you are practically begging for your build to break because John Bozo three countries away from you removed six bytes of code from that one project you were relying on. Now, of course, you shouldn’t be using LATEST in your dependencies, but I really don’t want to rely on the fact that our build people are smart enough to realize this. If I just committed the version I wanted to the repository, none of these problems happen.
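
The same point applies in any packaging system, by the way: if you must use one, at least pin the exact version you tested against rather than a floating one. An illustrative npm example (package name and version made up for the sake of the example):

"dependencies": { "express": "2.5.0" }

instead of

"dependencies": { "express": "latest" }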

3). Licenses Change. When your build person goes out there and changes a version number, do you think they actually read a license file? Let’s assume for the sake of reality that people are incredibly lazy… now, do you want to take the risk that so and so actually read the license? And that the license didn’t change? Seriously, it would take me about six seconds to change the license on some sub-project you use and then commit it. And suddenly the sub-project owns all of your IP simply because you used them. Now, the legality of that is a debate for other scholars than I, but it could certainly cause a mess. The only thing stopping a sub-project from doing this is the hassle of suing your ass into oblivion… And we all know that people are getting more and more litigious every day.

4). The Internet is, by its nature, unreliable. Do you really want to rely on the fact that the internet providers upstream from you are not going to screw something up right when your absolutely-must-deliver build has to get run? Seriously, the more people that have input into a process, the more likely that process is to get derailed. I do not want to think that my ability to deliver is dependent on whether or not Anonymous is going to cause a world-wide outage in protest over SOPA (which sucks by the way). Sure, you can run a mirror of x repository and spend your time maintaining that as well, but wouldn’t you rather spend time, oh I don’t know, outside? With a girl? Playing WoW? Doing anything else?

So the point here is this and this is the TL;DR for you lazy people as well… Dependency management adds both complexity and unpredictability to your systems and this is not a good thing. A Build process is about Rigor, and Dependency Management is antithetical to rigor. By using a Dependency Management solution you are willingly signing up for problems and extra work. Who wants that? When given the choice between that and just storing the files I need into my repository, I will choose the latter every time.

Now, I do think some dependency systems are way better than others. The node.js NPM system is amazingly clean, but it’s still begging for the problems I outline above. So, maybe not that awesome. It is easy to use though, wish Maven were half that easy.

So, that’s it. That’s why every time my coworker comes in and raves about how awesome Maven is I just point at his crotch and start laughing. I mean, really, Maven? Awesome? You got to be an idiot to think that. (My apologies to my coworkers.)

permalink: https://www.arei.net/archives/246
My Ideal Job

Lately, I’ve been asking myself if my current work role is really the best use of my talents.  But shortly into wondering about the answer to this question, I had formed an even more important question: what exactly do I consider the best use of my talents?  So, here, for better or worse, is where I think the best use of my talents lies…

First and foremost, I’m a hacker type through and through.  A work day in which I am not writing code is a terrible day for me.  A work day in which I write a little code is a terrible day for me.  A workday where I am heads down, balls to the wall buried in code and completely oblivious to the passage of time.  Ding!  Awesome day.  What’s even better about those kind of days is that when I’m in the zone like that (Interface Designers call it “Flow”) I am disgustingly prolific. I mean oodles and oodles of code being churned out.  That’s a win for not just me, but the company for which I am working.

Next, I am a very creative person.  This means I get strength and energy from creative outlets.  I am not going to be your go to guy to write that piece of software for which you have painstakingly provided pages and pages of detailed specification.  No, I’m the kind of guy you come up to and say, “Hey, I had a friggin awesome idea, can you whip something together for me to test the idea out?”  I will take your crazy ass idea and run with it.  Now, the result here can be mockups or it can be straight to code, I’m comfortable either way, although I think I’m more productive in code, but whatever.  Just give me an idea that I can contribute to and put my own spin on and I will exceed your most wild expectations.

Also, I love writing reusable components and libraries.  Recently Jacob Thornton, an engineer at Twitter (@fat), shared a tweet at JSConf 2012 that I really found interesting.  He tweeted, “new interview question: you have 45 minutes to write JQuery from scratch. get as far as you can. start from wherever you’d like.”  I absolutely loved this question, not just because I think it would really separate the wheat from the chaff as it were, but also because I would love that challenge.  Ultimately @fat concluded that anyone trying to answer that question would be screwed because there’s so much depth to JQuery, but I would absolutely love to try.  I could spend a lifetime reanswering this question over and over and over, getting it more and more perfect each time.  I have been fortunate in my career that the work I am asked to do more often than not has limitations that prevent us from using certain libraries.  I get to go in, study those libraries and then recode them for my project.  It’s quite enjoyable and amazingly enriching.

Finally, I love sharing my knowledge with others and learning their ideas and knowledge as well.  Mentoring to me can be quite a lot of fun and there is nothing better, IMHO, than a willing and eager student/peer that wants to learn or wants to debate.  I love that sort of thing.  The caveat, though, is that these activities must not take away from the above two activities.  I like mentoring some of the time, but when it becomes a full time job, when I start managing, that’s where I lose interest.  Surprisingly, I am extremely good at managing and have had a number of management positions in the past, but ultimately, I have no interest in doing it in the future.

So, all that said, my ideal job is Hacker and Evangelist of Prototype Libraries. Now, I know that’s probably not a real job (if you think differently, please send me an email!), but that’s what I would ideally like to be doing. And it’s pretty cool that I’ve figured it out.  I now have a benchmark by which I can hold up two jobs and ask, “How much does each of these jobs approach the ideal for which I have set myself.”  That’s the job I’m more likely to take, that’s the job I want.

So, current job, how much do you think you live up to my ideal?

permalink: https://www.arei.net/archives/245
node-untappd: A NodeJS Library for using the Untappd API

Last night in anticipation of the upcoming JSConf, I released version 0.1.0 of node-untappd.  node-untappd is a NodeJS API client for using Untappd services. My hope is that someone out there might use this to make some really cool things that they will then share with me and offer to buy me beer for my hard work.  We’ll see how that works out.

If you are unfamiliar with Untappd and drink beer at all, you need to make yourself familiar.  Untappd is a social tracking application for beer.  Think of it as FourSquare for beer.  Users checkin, rate, comment, track, and share beer consumption and notes.  It’s an awesome little tool for keeping track of exactly what you are drinking, where you are drinking it, and what you thought of it.

node-untappd connects into the Untappd application by exposing the Untappd API to your Node application.  You can do all kinds of queries, checkins, comments, and the like right from within your own tool.

You can install node-untappd by using

npm install node-untappd.

Make sure to read the README.md file

To learn more about node-untappd or to see the source code, go to the github repository: http://github.com/arei/node-untappd.

To see the API details go to: http://untappd.com/api/docs/v3.

Enjoy!

permalink: https://www.arei.net/archives/240
Easing NodeJS into the Future

Let me get this right out of the way. I’m a fan. NodeJS is really cool, easy to use, and feels just right the minute you try it out. I’m all in.

Now for the bad news…

There are some problems that NodeJS is facing this year. The upside though, is that these problems can be easily addressed. I only hope it is not too late… but maybe with a little effort, a little organization, and a whole lot of additional groundswell, we can propel NodeJS forward by a giant leap.

Hello, World

The very first programmatic encounter a new user of NodeJS will have is the Hello World example. This is a universal concept across all languages; the entry point is always Hello World. Hello World is the single most simplistic concept in writing a program in any language. It is fundamentally the most basic thing you can do.

The NodeJS Hello World example:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

The problem is that the NodeJS Hello World example basically says that NodeJS is all about building Web Servers, and in my opinion that is way too narrow. NodeJS is absolutely awesome at the Web stack, no disagreement. Yet, I believe that NodeJS is so much more and has nearly limitless potential: today we can build CLI programs, statistical analyzers, and standalone applications all in NodeJS, and not once do we need to create a Web Server to do it. By defining our most basic of examples in terms of the Web stack, we are defining our entire ecosystem in those terms, and that will continue to limit our potential. It is time for NodeJS to grow out of that perceptual constraint and I believe it starts with Hello World.

My solution, one line long:

console.log('Hello World');

Once you get users in with one basic example, you move on to the next one and the next one after that. Absolutely we should be teaching our n00bs how amazingly simple it is to host your own servers in NodeJS, but NodeJS is so much more than that. We need to appeal to all the use cases, not just those who want to build servers. So show us how to do it all.
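
For instance, a second “next step” example could be a command line tool that never touches HTTP.  A minimal sketch, assuming a file named count.js run with node:

// count.js -- a tiny command line tool, no Web Server in sight.
// Run with: node count.js somefile.txt
var fs = require('fs');

var filename = process.argv[2];
if (!filename) {
  console.error('usage: node count.js <file>');
  process.exit(1);
}

fs.readFile(filename, 'utf8', function (err, text) {
  if (err) { throw err; }
  console.log(filename + ' has ' + text.split('\n').length + ' lines');
});

Same runtime, same event driven style, and not a server to be found.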

Which leads to the next point…

Documentation

One of the big problems of the documentation is that it is trying to serve two different masters and it needs to separate its interests. On the one hand, it wants to be a teaching tool, helping new users through common problems and examples. On the other hand, it needs to be a resource document to which the more experienced users can turn. Both of these objectives are utterly necessary, but they are also at odds with one another. So let’s split them apart but keep them cross referenced with one another. So the reference has pointers to the examples and the examples cross link to the references.

Once we have separated the concerns from one another, then the community can put some real effort into beefing the documentation up big time. With regards to the examples, we need to put some serious effort into teaching our users event driven programming and how it works, why it works, where it is good, and where it is bad. On the reference material side of the house we need to flesh stuff out: every single function, identifier and object needs to be described and commented upon. Every callback also needs to be defined with exactly what is being passed into it when it is fired. I swear I spend half my time outputting arguments from various callbacks to understand what they are before I can actually use them.
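
To make the callback point concrete, here is the kind of annotation I mean, using fs.readFile as the example.  The comment style is just a suggestion, not an official NodeJS convention; the point is that the reference should spell out every argument the callback receives.

// fs.readFile(filename, [encoding], callback)
//   callback(err, data)
//     err  -- an Error object if the read failed, otherwise null
//     data -- a Buffer, or a String if an encoding was supplied
var fs = require('fs');

fs.readFile('notes.txt', 'utf8', function (err, data) {
  if (err) { return console.error('read failed: ' + err.message); }
  console.log(data);
});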

There are literally thousands of examples on the Internet about how to use NodeJS and thousands of people willing to share their insights. So let us put all that collective talent to work creating an amazing system dedicated to teaching the technology we are all so passionate about. Everything from video tutorials (like that one on youtube), to cross referenced API documentation, to examples of virtually everything we can think of, to sample applications and configurations. There needs to be a one stop shop for all things NodeJS and its name is not Google. I want to know how to do X and I do not want to hunt all over the place to find it. It is all about making NodeJS more approachable and easier to use.

And with that segue we move on to…

Startup

Simple NodeJS startup is wonderfully easy, just type node and go. Or you can get more fancy and supply a filename to start execution. The reality, however, is that most of us are not working in a simple world. We work in complex, custom, evolutionary, hybrid environments that defy description. I know this first hand, I’ve tried to describe them, it’s not pretty.

So we need to make starting NodeJS easier. After all, if it seems like it’s hard to do (whether or not it really is) we are not going to do it.

See, System Administrators today have a lot invested in their current solutions. They’ve been using Apache Web Server (for example) for almost two decades. They have put a ton of work into whatever cobbled together solution they have. Asking them to change, while great for progress, is just asking for a whole lot of argument. So why not make NodeJS fit into the architecture they already know and love? Why not provide them options and, at the same time, show them how easy it is to love NodeJS?

This comes down to three different ways NodeJS needs to run:

First, some people just want to run NodeJS. We got this one covered today. It’s easy, it’s powerful and it has a lovely command line interface built in if you want it.

Second, some people want to run NodeJS from a Web container. This is basically the PHP or the JSP model where NodeJS runs behind a Web Server and the Web Server sends specific requests to NodeJS as needed. There are dozens of protocols for doing this (CGI, FastCGI, AJP, etc) and implementing several of these should be, and is, pretty easy. A few github projects do this to varying levels of success. The ultimate goal though, is to bring these things right into the runtime so things are as painless as possible to set up. It could be as simple as just adding a command line switch to the NodeJS runtime to tell it that the incoming request is a CGI one or something. I am not trying to implement here, just throwing ideas out.
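
To make that second case concrete, here is a rough sketch of the plain CGI flavor, assuming the Web Server invokes the script once per request and passes the usual CGI environment variables.  It is a thought experiment showing how little glue is involved, not a proposal for what the actual runtime switch should look like.

// cgi-hello.js -- a NodeJS script run once per request by a CGI-capable Web Server.
// The server supplies request details through environment variables and expects
// headers, a blank line, and then the body on standard output.
var method = process.env.REQUEST_METHOD || 'GET';
var query = process.env.QUERY_STRING || '';

process.stdout.write('Content-Type: text/plain\r\n\r\n');
process.stdout.write('Hello from NodeJS via CGI\n');
process.stdout.write('Method: ' + method + '\n');
process.stdout.write('Query: ' + query + '\n');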

Finally, some users just want to run NodeJS as a robust service. For example, anyone who wants NodeJS to be the only Web Server and does not want to rely on third party tools. To do so, NodeJS needs to ship with the code and tools necessary for working as a full fledged service. Upstart and Forever are two tools that help with this, but why can we not put this technology into the runtime? This speaks to making it as easy as possible to get set up and rolling the way a user wants to get set up and rolling. And let us not forget not everything runs on Linux like it should; Windows still has its proponents and we need to be more approachable to everyone. Integrated technology for keeping a server up, running, and monitored would be ideal. As more and more NodeJS users look to deploy NodeJS into production, easing this process becomes more and more critical.
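
The heart of the “keep it running” idea is small.  Here is a minimal sketch, assuming an application entry point named app.js; real tools like Forever and Upstart add logging, backoff, pid files, and monitoring on top of this core loop.

// supervise.js -- restart app.js whenever it exits.
// (Illustration only; use Forever, Upstart, or similar for real deployments.)
var spawn = require('child_process').spawn;

function start() {
  var child = spawn(process.execPath, ['app.js']);
  child.stdout.pipe(process.stdout);
  child.stderr.pipe(process.stderr);
  child.on('exit', function (code) {
    console.log('app.js exited with code ' + code + ', restarting in one second...');
    setTimeout(start, 1000); // crude delay before respawning
  });
}

start();

Baking something like this, plus proper monitoring, into the runtime or its standard tooling is exactly the kind of thing that would make System Administrators comfortable trusting NodeJS in production.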

And I’m Spent

So that is about all I got so far. I’m sure there are dozens and dozens of other things that could really help NodeJS grow, but to me these are the big ones. Yet, these are also the ones that I think can be fixed right now.

Ultimately, we need to make NodeJS more approachable, more understandable and more reliable. Today NodeJS is crazy popular, but I believe we are rapidly approaching a turn which can direct NodeJS’ fate for years to come. It is the classic dilemma for any fledgling technology and the roadside is strewn with the corpses of those that have come before us. I honestly believe that this community can steer NodeJS to greatness, if it is willing to do so. This involves hard work, it involves decisive action, and most importantly, it involves the foresight to see what is to come. If we, as a community, can accept this role, NodeJS will explode to heights even we failed to imagine.

permalink: https://www.arei.net/archives/239
Dear Stupid Recruiter…

Let me be blunt: When you email me to tell me all about how you have positions and could I just call you to find out more details… ya, that just pisses me off.

I am a busy individual. I have a really good job. If you want to have any hope of luring me away from that really good job, you have to be (or offer) better. And I’m not talking about money here. I’m talking about better understanding, better service, and better opportunity. You have to understand that in our industry (software) there are more positions out there than there is talent; you have to understand that I have no interest in talking to you about the same boring job every other recruiter is pitching; you have to understand that YOU are trying to use me to make money and therefore have to provide ACTUAL VALUE to me.

Let’s take a recent example… I got an email from a recruiter telling me that his company had available positions in Infrastructure, Software Development, Integration, Engineering, and Information Assurance, and if I would like I could call him for the details… DELETED. That’s right, straight to the trash can with that email. Why? Because everyone has positions in Infrastructure, Software Development, Integration, Engineering, and Information Assurance available. Oh, and also because I’m not going to waste time talking to you about a generic job posting.

So how about a better example… Well, one recruiter piqued my interest with the generic sounding descriptions, so I emailed back and I share with you what I wrote:

Dear XYZ, I am very intrigued by the positions you listed.  I would love to hear more about the specific position you have in mind for me.  Could you email me some details and I will let you know if I am interested?  I’m very busy right now, so email is the best way for us to communicate, if you don’t mind.

The answer I got back was:

Dear Glenn, you can reach me at xxx-xxx-xxxx.  I’d really like to talk to you about this opportunity.

Okay, for starters, spell my friggin name correctly.  You misspell my name, automatic trash bin for you. Secondly, I’m not going to call you until I am convinced you can provide me value.  You’ve already failed to understand that people are busy and it doesn’t look good for our relationship. Finally, actually take the time to READ the email I sent you.  The fact that I sent you one at all, considering how many recruiter emails I see during the course of the day is amazing.  You should hang on my every word.  Seriously.  I would estimate that I pretty much delete outright 95% of the emails I get from recruiters, and the other 5% gets deleted after one email exchange.  That’s pretty sad.  You can do better.

So how, as a recruiter, does one actually do better?  I’ve written you a list.  (Now consider this… I am willing to spend 30 minutes more writing a list about how to do better than I am willing to spend responding to your lame emails.)

1). Actually read my resume and have the technical ability to understand it.

Yes, I list Java on my resume just like everyone else. But, if you actually took the time to read my resume you would see that I haven’t actually listed Java in a job for the last 5 years; instead there’s a lot of talk about JavaScript.  Now, which type of technology job do you think I am looking for? The one from five years ago which is slowly dying or the one that I have been working in for the last five years that’s hot as shit right now?

2). Stop trying to appeal to the morons.

A lot of recruiters just go by keywords and the mass mailing approach to getting clients.  Maybe this works for some, but it will never work with me or with anyone who considers themselves my peer.  Believe me, I can find a crappy, middle of the road job by myself, I don’t need you.  What I need you for is to find me that one job that is way over and above what I can find myself. All you’re offering me is, to quote Hamlet, “Words, words, words.” I want the dream job, not the boring average job. Which leads me to my next point…

3). A job that is described in keywords is not a job in which I am interested.

Job descriptions are marketing tools.  If you want to sound like Budweiser and say “We taste just like everyone else!” then bully for you, but I am still not drinking it.  Why would I when I can consume an experience like Heavy Seas Loose Cannon or Boulder Vanilla Porter?  Spice it up and really try to sell it… without using the same damn terms everyone else is using. Be creative… I know, TV has killed all creative instinct in you, but surely you must remember something from being a child.  Try it out.

4). Take the time to be right.

Grammar and spelling mistake = Deleted.  No exceptions.  If you cannot be bothered to wordsmith simple emails, I cannot be bothered to read them, and I’m certainly not going to trust you to be accurate when representing me to a customer.

5). Listen.

If I tell you in an email that I am really busy, try listening to me and working with me through email.  Stop trying to get me on the phone.  And worse yet, do not send me your form to fill out.  I’m not applying to a job at Burger King.

6). DO NOT copy and paste a job description.

Google may be the best thing you ever found for finding leads, but it’s also your enemy when it comes to job descriptions.  I am willing to bet that I can find the company hiring directly and circumvent you (not that I ever have) just by using Google and the cut and paste you just did of the job description.  Yes, it might take a little longer to recreate the job description, and worse yet you might have to actually understand technology to do this, but it shows me that you are actually trying.

Now, I know many of you recruiters have reasons why you do what you do and I really want to believe it’s not just because you are lazy.  So let me try and answer some of your concerns.  (I’ll add to this if you email me some constructive feedback.)

I need to reach as many people as I can – Go watch the first 10 minutes of Jerry Maguire.  Now, watch it again and this time try listening.

The only way I have to understand you is your keywords – That’s great.  Use keywords to find me, by all means, but then take the ten extra minutes to actually read what you found.  Bonus points if you actually look at any other part of my blog while you are there.  Actually take the time to try to UNDERSTAND me.

I don’t have time to read through all the resumes I see – Make the time.  Quality not quantity; if you think differently then I salute your mediocrity.

Staying on top of Technology is hard – Yes, yes it is.  Yet, some of us manage to do it just fine.  Twitter can be your best friend here.

Company X wants its job listed as Y – So what?  Eventually sure, share that with me… but for initial contact, sell it.

I can’t afford to spend all my time on you – Then I cannot afford to spend time on you.  Remember, you only make money by me changing jobs, which is a ridiculously hard thing to get people to do.

Quality is nice, but Quantity pays the bills – But quality builds reputations.  Take, for example, where I live… there is nothing but chain restaurants here, with a few notable exceptions, and I have never been known to espouse the amazing food I just had at a chain restaurant.  Even with those exceptions, it’s the quality I’m espousing… not the quantity. If you want to be the kind of recruiter that people tell their friends to go to, then quality is a must.

When it gets right down to it, I just want a recruiter whom I not only trust, but that I know I can return to if I need to.  Someone who understands my SPECIFIC needs and desires in the workplace and doesn’t just want to represent me because of the money, but because he or she is actually helping me succeed.  In twenty plus years I have only met two recruiters who meet those goals.  And I keep in touch with both of them.

 

permalink: https://www.arei.net/archives/232
Quick Answers to a Hard UI Question

The Question

A friend of mine recently asked the Twitterverse the following question: “Can anyone recommend a good book on designing a good software UI? What works, what doesn’t, and in which situations.”

The Simple (but ultimately unhelpful) Answer

Keep reading for the real answer.

The Difficult Answer

I love it when people ask me this question, because it means that they are actively thinking about the Interface and they want to improve. I applaud their willingness to change. Unfortunately, wanting to change and reading a book (or two) will not get them the results they desire.

User Interface Design is a huge field of study, a specialty built on decades (even centuries, if you talk about Information Design) of research and learning.  There are undergraduate and graduate programs around the world that teach only this subject.  Succeeding in one of these programs is only the beginning.  Experience is what really counts in this field. A person, my friend or any person, is not going to learn this subject by reading one book.

It’s akin to me going to the bookstore and buying “The Complete Idiot’s Guide to Starting Your Own Business”.  Sure, this book will tell you the basics and give you some insights, but it’s only the tip of the iceberg.  Starting and running a business is an extremely complicated process and to think one book is going to get you there is woefully short sighted.

My problem with the original question is that it is naive.   It is naive to think that simply having the rules will allow you to effectively apply those rules.  It is naive to under-value practical experience in this field.  It is naive to treat an entire field of study as an after-thought.

The Real Question

The real question being asked is “Are there a couple of things that I can do to make my interfaces better?”

The answer to that is No … and Yes.

No, because as I’ve just said, there is no way to summarize quickly an entire field of study or years of experience.  Do not under-estimate just how valuable these things are to someone practicing in the field.

Yes, because I believe there are a number of tips that everyone can use to make their interfaces better from day one.  I’ll go into these really briefly, but I want to be very clear there’s a whole lot more to it, a whole lifetime of learning if you are willing to do it.  It’s an amazing, wonderful field and I thoroughly encourage everyone to study it.  Just be warned that it’s big, complex, and sometimes very unrewarding.

Remember: Interfaces are Everywhere

Anything people interact with is an interface: A Dictionary (the physical book kind) is an interface, a web site is an interface, a form you fill out for your employer is an interface, and a software API is an interface.  Each interface needs to be designed for the user.  So the next time you design something that someone else is going to use, or even for your own use, consider: how the interface works, how easy it is to use, and whether or not it meets your needs.

The Real Answer

While I encourage you to go and read the above books, if you do nothing else, keep the following rules in mind when you are designing any sort of user interface.

Consistency

The number one thing any software engineer can do to make interfaces better is to make things consistent.  I cannot begin to tell you how many interfaces I see that are inconsistent. (I’ve even made this mistake myself a number of times.)  If you do something one way in one part of your interface, always do it that way throughout the entire interface.  For example, if the Okay button is on the left and the Cancel button is on the right, do not change the order of these things somewhere else in your interface.

This also means adhering to the consistency norms defined by your Operating System or Operating Environment (a Web Browser is an Operating Environment).  Yes, you might not like it and you might think you can do it better, but the interface is NEVER (underlined and bolded) about you.  NEVER.
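
One cheap way to enforce this kind of consistency is to make a single piece of code responsible for the repeated element, so the decision is made exactly once and cannot drift.  A small sketch, assuming a browser environment:

// One function builds every confirm footer, so the Okay/Cancel order
// is decided exactly once for the entire interface.
function buildConfirmFooter(onOkay, onCancel) {
  var footer = document.createElement('div');
  footer.className = 'dialog-footer';

  var okay = document.createElement('button');
  okay.textContent = 'Okay';
  okay.addEventListener('click', onOkay);

  var cancel = document.createElement('button');
  cancel.textContent = 'Cancel';
  cancel.addEventListener('click', onCancel);

  footer.appendChild(okay);   // Okay always on the left...
  footer.appendChild(cancel); // ...Cancel always on the right.
  return footer;
}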

Details, Details, Details!

Anyone who does Interface Design should be horribly detail oriented, almost compulsively so. Every aspect of your interface needs to be examined to ensure that the details are, going back to my previous point, consistent.  Form fields should be the same size, buttons the same size, everything aligned correctly, everything positioned perfectly, etc.  The details of the User Interface are the critical difference between good work and sloppy work.  And sloppy interfaces are bad interfaces.

Nothing annoys me more than going to another company and filling out their poorly designed forms.  (My current company is especially bad at this.)  These, as I mentioned before, are interfaces, and spending a little time to make them cleaner and clearer is worth its weight in gold.  I have, in the past, walked out of companies that were interviewing me merely because their HR forms sucked.

Know your Users

When beginning your design, the first thing you should be asking yourself is what you know about your users.  Shneiderman and Cooper (the books above) can tell you a lot more about Actors and User Stories and all that, but it really just comes down to understanding how your users like to work, and how your interface is going to make some aspect of that easier.  So, get to know your users.  Are your users computer illiterate? If so, it’s not likely that they will understand something like Drag and Drop right away. Once you understand your users’ motivations and needs, then you can begin to design a system that best reflects them.

This is often very tricky because none of us have millions of dollars to do user studies, and shadowing, and user testing.  A lot of times our customers are abstract visions of customers.  That’s okay.  Just take some time to try and imagine (acting or role-playing training can be really helpful here) what those customers’ motivations and needs would be.  It’s not perfect, but it will do in a pinch.

Ease of Use

So it goes without saying that the easier an interface is to use, the more people will like it.  Of course, this must be tempered against the motivation and goals of the actual users.  So the real goal is that it must be easier for the users to do what their motivation and needs require.  I once read that Ease of Use can be defined as the number of mouse clicks or keyboard interactions required to perform some task.  It’s not a perfect measurement but keep it solidly in mind when designing.  And this segues into my next point…

People are Lazy

Assume that people are lazy and you will never be disappointed.  They want to do the least amount of effort for the greatest amount of payoff.    I call this the commitment factor: how much of my effort do I have to commit to receive the greatest payoff.  This is why the Lottery is so effective.  It does not seem to matter that statistics are stacked against the players, they still play because it’s easy to play and the potential reward is huge.

In interface design this is equally true.  The users will like any interface that makes things the easiest.  The converse of this, however, plays a valuable role as well… the users will like any interface that makes things easier, so long as they can control the results.  This means that lottery users like playing the lottery, so long as they can pick the numbers.  The more complicated the system to automatically pick the numbers, the less the users will like the results.

Ideally, the best systems are predictive systems that let the users control just the right amount of variables.  What is the right amount? Well, that’s where iteration comes into play.  Try a low amount, try a high amount, calculate the best amount, and then keep refining.

Understand Flow

Cooper defines Flow as “When people are able to concentrate wholeheartedly on an activity, they lose awareness of peripheral problems and distractions.” (Cooper, 4th Edition, p119). We’ve all had these Flow moments: where what we are doing is so focused, so in-depth that we don’t notice external things such as what time it is, coworkers leaving for the day, or even phone calls from our significant others wondering why we are not home yet.  This is Flow, and it’s a very, very good thing.  Flow allows users to work on their specific need at an optimum level.  It is the goal of every good interface.

Poor interfaces interrupt flow with things like unnecessary dialogs, errors, hard to use processes, etc.  The interface that interrupts less and is easier to use helps to encourage Flow.

As part of Flow I generally include visual flow in the discussion.  Visual flow is the ability of the human eye to find what it needs.  To this end, interfaces that focus or showcase what is most important to the user are better.  In the western world we read top to bottom, left to right, so items on the top left receive more attention than what is in the bottom right.  Keep this in mind as you build your interface.

Also, be aware that animated things attempting to engage or grab the user’s focus on your interface ALWAYS disrupt flow.  Use animation to enhance, never to engage.  Assume users have become oblivious to animated, blinking things and largely screen them out these days.

Design for Accessibility

One of the big failings of modern day design is that it fails to account for differences in human beings.  Some human beings cannot see, some cannot manipulate a mouse, some cannot determine the difference between red and blue.  Build for accessibility.  Be aware that some people view your site in really low resolution and that some view it in really high resolution.  Account for the differences in your fundamental design and from the beginning.  Going back and having to re-engineer your site to meet Section 508 standards can be extremely painful.  Do it right from the start.

One of the big things here is when sites use color to indicate differences.  Estimates seem to place color blindness in the US at 10 to 20% of the population.  Therefore using color alone to indicate that some change has happened is not an acceptable solution.  When in doubt use a color change AND some other indicator (selection count, underlining, etc) to indicate response.
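
In code this just means toggling a second cue alongside the color.  A small sketch, assuming a browser environment and a form field being flagged as invalid:

// Flag a field as invalid with a color class AND a visible text cue,
// so the state is still readable without color vision.
function markInvalid(field, message) {
  field.classList.add('invalid');          // color cue, e.g. a red border via CSS

  var note = document.createElement('span');
  note.className = 'field-error';
  note.textContent = '\u2716 ' + message;  // symbol plus text, independent of color
  field.parentNode.appendChild(note);
}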

Feedback

Feedback is the process of responding to user behavior.  The more feedback, the more the user knows that they are doing things.  A common flaw is to do something without providing feedback to the user that something is occurring.  We see this in lots of User Interfaces because almost all User Interfaces rely on a single thread model wherein the interface rendering and response happen on the same thread as most processing.  The developer who pushes processing off into other threads (or into WebWorkers in the Web space) can respond with appropriate feedback to the user without waiting for the process to resolve.
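
A sketch of that pattern in the Web space, assuming a worker script named crunch.js that does the expensive work and posts its result back; the point is that the interface can show progress immediately instead of freezing until the work resolves.

// main.js -- hand the heavy work to a WebWorker and give feedback right away.
var status = document.getElementById('status');
var worker = new Worker('crunch.js');

status.textContent = 'Crunching numbers...';   // immediate feedback; the UI thread stays free

worker.onmessage = function (event) {
  status.textContent = 'Done: ' + event.data;  // feedback again when the work resolves
};

worker.postMessage({ rows: 1000000 });         // kick off the long-running task
// (crunch.js would do the expensive loop and call postMessage(result) when finished.)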

Feedback is a key factor in the responsiveness of a site and responsiveness is a key factor in a site’s usability.  The more responsive a site appears, the more users feel like they are in control of how the system is behaving.

You Cannot Please Everyone

Just assume that no matter how great your user interface, not everyone is going to like it.  Instead, your goal should be to hit the 80% of users who will like it.  There will always be edge users who have different motivations and needs.  So upfront, identify all the users and determine what the 80% is that you can achieve.

Sure, it is possible to build an interface that scales to every type of user.  However, you will spend a disproportionate amount of time on the last 20% compared to the middle 80%.  Think of a bell curve, and try to get the middle of that curve.

I know two sections back I just said to design for differences, but there is a separation between accessibility differences and designing for the edge users.

Learn from your Mistakes

Finally, learn from your mistakes.  I always believe that the next version of your interface will be superior to the previous version, largely because you learn from the problems your users had with the current one and build a tighter interface for the next one.

As a co-worker of mine often says: It’s an Iterative Process.

Conclusion

So that’s what I have for you.  My long answer to a friend’s very simple question.  I hope I did not insult my friend, but the reality is that things are much more complicated than his initial question assumes.  That said, maybe my last section really answers the question he wanted answered.  Remember, User Interface Design is extraordinarily complicated.

A Final Note:  This topic does not take into account the whole Graphic Design aspect of UI design.  That is an entirely different field of study.

permalink: https://www.arei.net/archives/233
Tweet and Like

Today I removed comments from the site. I replaced them with tweet and like (facebook) buttons. Although I still maintain my stance that Facebook is dumb. Twitter, on the other hand, is the awesome.

So, if you see something you like here, tweet about it. Or Like it if you swing that way.

permalink: https://www.arei.net/archives/229
Developer Drift

Lately I’ve been the subject of what I call developer drift.

Developer Drift is the process by which an unchallenged developer slowly moves from one project to the next. The project may be in house or external or something completely fabricated by the developer’s mind, but it basically means a developer is less interested in the current project than the project over the horizon. It’s the “Grass is always greener” truism made concrete in software engineering.

For me, this has taken the form of the fact that our customer wants really boring user interfaces which I can crank out like they are nothing. Problem is, I almost never crank them out, because they are meaningless and I never feel challenged/creative by them. (For me challenged=creative.) So I take forever to implement them and tend to make a lot of excuses about why it is taking so long. I feel justified in taking so long because, when I am challenged, I really do crank out code at an extraordinary rate which borders on the obscene when compared to average developers. I’m very prolific when I want to be.

The same thing was true when I was back in college studying English Literature. I could crank out a paper that I found interesting in no time flat, but assign me something that was pedantic and I’d sooner rip my own teeth out with a spoon (“because it will hurt more”).

So the real question I’m trying to find an answer to is “How do you deal with developer drift?” How do you stop people from losing interest when they are bored because the project has become boring? A project I used to work on is suffering from this very problem… they want to keep the team together, but the more boring stuff they do, the less likely they are to be able to keep the team together. Is there a way for this project to challenge its developers at the same time as doing boring things? Would contests or “feats of skill” help keep things from getting stale? Or should they just accept this as the life cycle of the developer, assume that people are going to drift away, and prepare for the next generation?

You tell me… how does your project deal with developer drift?

permalink: https://www.arei.net/archives/213
Maven Vitriol

I’m an ANT guy. I’ve been using ANT since 1999. It’s amazingly powerful, amazingly useful and very flexible… and I’ve never once written my own ANT task. Just using the ANT tasks available or out there in the community I have been able to build hundreds of software projects.

Now, of course, everyone says, “Go Maven”. So I went Maven… and it sucked.

See, I have very strong feelings about Frameworks. (Not to confuse you here, but Maven, like ANT, is a Build Framework.) The problem with 99% of all Frameworks out there is that they force you into a specific way of doing something. “But that’s the whole point,” you scream at me. And I agree, that’s the point… until the moment you need to do something else.

Now, I use frameworks all the time in my software development, we all do. There’s one listed over on the right side in my links section that I actually endorse. So am I not hypocritical for deriding frameworks in one breath while using them in another? Of course I am, but here’s where I’ll caveat it… I use frameworks that place the least limitation on me doing new things. Maven, as an example, is a rigid framework. Doing something new with it is a difficult challenge. ANT, on the other hand, while limiting, isn’t nearly as limiting.

So here’s AREI’s Framework Measurement Testing System. First, evaluate a framework for the project you are working on, say building a Java Swing application. Next, evaluate the framework for another, completely different project you might work on in the future, say building a Website. How does the framework stack up for both things? Finally, consider what’s going to happen when you need to take the framework beyond its scope into new places. How accepting of that path is it?

Here’s some examples:

I had an ANT script that would append together a bunch of .JS files and then minify the entire set. Worked like a champ. The project I was on decided to give Maven a try instead of ANT. So I had to come up with a way to do the same thing in Maven. Two weeks of work later I ended up just calling ANT from inside of Maven.
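
For context, the task itself is tiny.  Here it is sketched as a plain NodeJS script rather than the actual ANT target or any Maven plugin, assuming the .JS files live in a src directory; the final minification would just be a pass through whatever minifier you prefer.

// build.js -- append together every .js file in src/, in name order.
// (An illustration of the task, not the ANT or Maven incantation; run the
// combined output through your minifier of choice as the last step.)
var fs = require('fs');
var path = require('path');

var srcDir = 'src';
var combined = fs.readdirSync(srcDir)
  .filter(function (name) { return /\.js$/.test(name); })
  .sort()
  .map(function (name) { return fs.readFileSync(path.join(srcDir, name), 'utf8'); })
  .join('\n');

fs.writeFileSync('combined.js', combined);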

Anyway, this post was really meant as just a simple link to an awesome blog posting that unleashes some much needed fury on Maven. But I got a little carried away in the intro.

So in summation, Maven sucks. Pick a framework that you can change. Damn the man! Save the Empire!

permalink: https://www.arei.net/archives/193
Google Wave

Google announced yesterday a new offering called Google Wave.  There is an excellent article at TechCrunch that gives a complete overview.  All I can say is: THIS IS HUGE PEOPLE. The concept of Google Wave, the underlying idea, is exactly the right step that the web and the internet need to take. It’s a combination of Email, Instant Messaging, Twitter, Transparency, Collaboration, Wiki, Media sharing, and so much more.  Oh, and it’s an Open API and Open Source to boot.

If you’ve ever talked tech with me in the last three years and we’ve had the discussion about what is next in technology, then we’ve had a very similar discussion about the concepts behind Google Wave.  I’m not claiming Google has once again stolen my idea, but what I will claim is that there clearly is a need for this type of product, a need that I saw and Google saw and others saw as well.

Two things in particular I want to call your attention to that make Google Wave a huge idea:

1). Adhoc groups… What adhoc grouping really does for you is to let people create internet groups as easily as they create real world groups.  Think of it this way… when you walk into your work place break room and two other people walk in and the three of you start talking, you have created a group.  It’s fast in the real world, so why can it not be like that in the internet world? For most of the internet, when you want to form a group and start working together there’s a fairly large investment in creating the meta systems the group needs: setting up a mailing list, setting up forums, setting up a media store, setting up a user database, etc etc.  It’s a pain in the ass really, and it makes setting up a group fairly non-trivial.  To some degree, Yahoo Groups and Google Groups automate a lot of that, but then you still have to go through the effort of getting people to join etc.

What Google Wave does is to make group creation simple and almost instantaneous.  Basically, you create a group by dragging a bunch of contacts together and off you go.  You can immediately begin discussion, expand the group, whatever.  Additionally, if you need other tools for that group like a map or a document or some other thing, you can add a widget or a robot to participate in that group and boom, you have more functionality to fill the group’s needs.

2) The second feature that is huge for me is transparency.  Transparency is the notion behind Twitter in that you broadcast snippets of your activities out to the world and everyone can see what you are doing.  This is only the beginning of transparency, though.  To take it further you need to automate transparency such that as you do things, they are automatically published.  Of course, this brings up privacy concerns, but I think there are simple ways to solve this.

I know that Wave has some level of transparency, but it’s still too early to tell how much.  I suspect that even if this is an opt in version of transparency, like Twitter, that soon there will be robots and widgets (the extensions to Wave) that will automate a lot of this functionality.

The point is, though, that transparency can be a huge social tool… but more importantly it could be a huge business tool.  And the company that sees this is going to get a huge edge over the company that does not.

Anyways, those are my thoughts on Google Wave.  You should definitely check it out.

permalink: https://www.arei.net/archives/122