The Vancouver Sun Run is one of OpenRoad’s longest-standing employee events: we’ve been running it since 1999! This Sunday will be our 16th year participating in the popular 10-kilometre race.
Since the acquisition of design agency Mod7 in 2013, our T-shirt logos have received a big upgrade. We revealed our first-ever designer-made logo last year, which our runners wore proudly. This year brings the second iteration of the logo, a minor refresh with a similar ’90s retro concept. Pretty sick.
Unfortunately, the printing restrictions only allow for black and white logos for these T-shirts. So we decided to play around with it, just for fun. A bunch of us got together and jammed on a couple of colourful ideas to push its retro spin just a bit further:
And by popular demand in the studio, a rad wallpaper was born:
Enjoy. See you on the pavement this Sunday.
Daryl Claudio is an Art Director at OpenRoad. He loves custom Photoshop keyboard shortcuts, locally roasted fair trade coffee, and Bryan’s hair in that photo up there.
A talented, multidisciplinary designer, Kaitie plays a pivotal role in maintaining the quality of design in all areas of ThoughtFarmer—from product design, to marketing, to helping our customers bring their individual intranet brands to life. She takes pride in creating solutions that improve the day-to-day work of employees, whether the result is obvious or subtle. With a keen eye for detail and an empathetic approach, Kaitie collaborates with the product development team to create elegant, functional touch points along the user journey.
Kaitie studied graphic design at Langara College, held several in-house design positions, and later worked for Langara assisting the coordination of the Continuing Studies design programs. Her work has been featured in HOW Magazine. Before she walked through our front door, Kaitie was a designer at Cedar Made Design where she worked with clients like Royal Columbian Hospital Foundation, 6S Marketing, TransLink, David Suzuki Foundation and the University of British Columbia.
In her down time, Kaitie can be found hiking the trails of North Vancouver, running along the seawall, or climbing rocks, often with her adorable deaf dog, Finn, loving life right alongside her.
The topic of hiring is frequently discussed in the tech community. It’s a seller’s market, where talent is in high demand. Hiring is also a position of great responsibility, where someone who’s mediocre can become an albatross around the team’s neck. Stories are common about days-long interviews with multiple rounds meeting different groups of people.
When I came to work at OpenRoad, my interview was nine months long, and involved a major project for one of our biggest clients.
Fortunately, it was paid work, and it was ideal from a hiring perspective: OpenRoad had an extended period in which to evaluate me: they received real code from me, they interacted with me as part of a team, and they saw how I handled deadlines and project pressures. When they hired me, I needed no ramp-up. On my side, I got to experience real working conditions and my potential colleagues at OpenRoad.
Not all businesses have the wherewithal to hire someone as a contractor for the better part of a year to evaluate them.
Unlike other fields where there are more obvious markers of professional credibility—cases won for lawyers, or papers published for academics—we still don’t know how to judge developers aside from working with them and seeing what they deliver. The discussion in the tech community about how to hire is really a question about how to give the process of hiring some certainty that seems particularly lacking for our profession.
The Age Of Competency Tests
In the ‘90s, formal proficiency tests and certifications tried to fill the role of badges of professional ability. If a candidate for a Java development role wasn’t also a Sun Certified Java Professional, or a network engineer also a Cisco Certified Technician, then the same vendors were happy to sell exams that could be administered as part of the interview. These were multi-page affairs that took half a day or more, and they were high stakes—the potential employer had paid a lot of money for the test, and was biased towards treating it as distilled credibility. You could lose marks because you didn’t remember an obscure corner case that never occurs in practice.
Even then, a candidate could be a lemon. Answering a lot of narrow questions correctly implies a broad grasp of the subject, but that’s not guaranteed and can even be misleading. In 2006, I earned my Certified Information Systems Security Professional (CISSP), a highly regarded security certification that requires passing a six-hour multiple-choice exam covering ten different knowledge domains. Some of it is deeply technical, like cryptography algorithms; other parts are about crafting policies in a corporate environment, or secure development processes. Typically a quarter to a third of test takers fail, so my employer paid to fly employees to its headquarters for a week-long boot camp before the exam.
What did I learn from that boot camp? That a secure fence is eight feet tall. That was an actual question on my test: how high is a secure fence? Twelve feet, another choice, is the wrong answer. The test was full of questions like that, where the right answer had to be given to you. There was a rationale behind the right answer, in this case that eight feet with triple-strand razor wire was considered sufficient to deter casual intruders, while six feet without razor wire was not (and twelve feet was unnecessarily expensive, when eight feet would do). Knowing the rationale in order to reason to the correct answer (a much more valuable skill for a consultant) was actively punished by the multiple-choice format. We were told outright that any answer reading “the support of upper management” was always correct, because the test designers held as axiomatic that upper management’s full support was the foundation of all security.
Replace Competency Tests With… FizzBuzz?
Exhaustive tests have not proven to be reliable indicators of professional ability. However, there still has to be a way to filter out people in the interviewing phase who are simply unable to do the job. For software development roles, we use a common test called FizzBuzz. It’s a smoke test, a basic competency check that, if you’re unable to pass, really should rule you out. It performs the filtering function that certifications and professional exams do, at much less cost in both time and money, without signalling a false positive of general ability if the candidate aces it.
The test is this: write a program that prints the numbers from 1 to 100, except that when the number is a multiple of 3, print “fizz”; if a multiple of 5, print “buzz”; if a multiple of both 3 and 5, print “fizzbuzz”.
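A correct solution takes only a few lines in any language. Here’s a minimal sketch in Python, just for illustration (candidates can use whatever language they like):

```python
def fizzbuzz(n):
    """Return the FizzBuzz word for n, or n itself as a string."""
    if n % 15 == 0:          # multiple of both 3 and 5
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

for n in range(1, 101):      # 1 through 100, inclusive
    print(fizzbuzz(n))
```

The only subtlety is checking the "both" case first; testing 3 and 5 before 15 is the classic way to get it wrong.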
I actually flunked this the first time I tried it. We were discussing it, I blurted out some pseudocode on the whiteboard, and my colleague pointed out that I wasn’t supposed to print the number if I printed “fizz” or “buzz”. Whoops.
Tech interviewing in the early ’aughts shifted from “test that they know this” to “test how they think”. One vein of that was riddle interviewing, exemplified by Google’s process, where a bunch of engineers would ask you to figure out the most efficient way to move a mountain one inch to the south, and then judge you on how you worked the problem. This method has its own issues, like cram sites full of questions like this, clever answers included. More seriously, as we’ve started to broadly understand the sociological context of interviewing and its effect on candidates and interviewers, the deeper problem becomes evident: if I ask you to judge someone’s thinking, you’ll tend to be biased towards thinking like yours. At least the professional exams had objective answers like “eight feet”. I’d hate to think I blew an interview because the interviewer disliked my whiteboarding.
A better direction, in my opinion, is towards simpler tests that are obvious no-hires if the candidate just flat out fails them, but offer a chance to pass if their work is interesting, all while being relatively low stakes. FizzBuzz does nicely at that: it takes 10 minutes at most, 20 if you want to explore whether optimizations are available or to discuss intricacies of the language, but it’s still a small part of a larger interview process. More importantly, my failure at FizzBuzz wasn’t an inability to program; it was rushing through the specification. “He can code but needs to be more careful on requirements” is good data for a candidate who otherwise performs well in the interview. Recognizing weaknesses is as valuable as verifying strengths.
But how do you do that for roles that aren’t about programming?
FizzBuzz For Non-Programmers: The Smashificator
We were hiring for a Quality Assurance role, initially for manual testing but also for an experienced QA person who could build up our quality assurance practices at OpenRoad. How do you do basic, fast competency testing for a QA candidate?
You build a buggy app. We called it the OpenRoad Dualie Addend Smashificator.
The idea came, like most good ones do, as an offhand comment in a meeting. The president joked that we should build a poor calculator app and see if the candidate could find the bugs. We kept coming back to that idea: why not build a simple app in a web page, give a brief requirements list, and give the candidate 30 minutes with it?
So I set aside an afternoon and built a single page app that took two numbers and added them. It had two text inputs, plus and minus buttons for each, and a button to trigger calculation. There were two pages for it: the first had requirements and candidate instructions, the second was the app itself.
As a programming exercise, it’s interesting to try to build something buggy. If something works that shouldn’t, you’ve got a bug, but it’s also a requirement if it doesn’t work as it shouldn’t—it can’t not work in a way it’s not supposed to not work. Good times.
Our candidate instructions were these:
- Read the requirements
- Create a test plan
- Launch the app and execute the test plan
- File bugs in JIRA, our issue tracker
Here were the requirements:
- The adder is a self-contained web page
- The adder has two inputs that accept numeric values only
- The range of acceptable inputs is 1 through 10, inclusive
- Input is by keyboard or by increment/decrement buttons
- On valid input to both text boxes and pressing the calculate button, the total is displayed
- The design is responsive
Our app had many deliberate flaws in it, but we were careful to vary the flaws. Some were deviations from the requirements, such as auto-calculating a total after entering a second number, rather than waiting for the button press. There were math errors—if the second input had a 1, it was treated as a zero. There were boundary errors: you couldn’t enter a number outside of 1 to 10, but if you used the increment/decrement buttons, you could go straight through those bounds. And there were ambiguous cases: if you used the buttons to get inputs outside the acceptable ranges, the addition was still correct—should that fail or succeed?
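Seeding bugs like these is mostly a matter of wrapping correct logic in a few quiet deviations. Here’s a hypothetical Python sketch of the flawed addition logic (the real Smashificator was a web page, and `smashificator_add` is an invented name for illustration):

```python
def smashificator_add(a, b):
    """Deliberately buggy adder, mimicking the flaws described above.

    Hypothetical sketch only; the real app lived in a single web page.
    """
    # Seeded math error: a second input of 1 is treated as a zero.
    if b == 1:
        b = 0
    # Note what is *not* here: no range check on a or b, mirroring how
    # the increment/decrement buttons could push inputs past 1-10.
    return a + b

print(smashificator_add(2, 1))   # prints 2 -- the "1 treated as 0" bug
print(smashificator_add(3, 4))   # prints 7 -- ordinary inputs still work
```

A good candidate’s test plan catches the first case quickly because it exercises every small input value, not just a couple of happy-path pairs.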
Promising candidates did a few good things immediately. They would spend the first half of the 30 minutes creating a simple test plan that covered a good breadth of issues in the short time available, usually with simple one-liners that were fine for shaping a limited amount of QA attention (like “test entering values outside the acceptable ranges in both inputs” and “verify that addition is correct in all cases of valid inputs”). When they found issues, they stopped and created a ticket describing the bug and the circumstances, then resumed their test plan.
Afterwards, we would spend around an hour evaluating the exercise. We would ask them to prioritize the tickets to see if they gave the math errors a higher priority than the red border that didn’t disappear. If they didn’t finish their test plan, it wasn’t a problem unless they got stuck on a particular bug that ate all their time. All QA involves cost-benefit analysis, and given a short time, it’s better to have a more thorough survey of many bugs than full characterization of just one.
Within a limited scope, we got a demonstration of a candidate’s abilities exceeding a reasonable baseline, without creating a pass/fail moment in a long process. We also got a demonstration that it was a limited test—there’s no way someone could fully evaluate the toy case we presented in half an hour, but if they handled their limited time well, it gave us a demonstration that the candidate could prioritize adequately.
Hiring for Real-World Skills
Many jobs have implicit FizzBuzz tests. Our design department includes, as part of its interviewing process, working the pipeline: create this and then hand it off to someone else—if they can’t find that someone else in an office of forty people after they’ve already been introduced, that’s a bit of a red flag.
The important thing to remember about FizzBuzz tests is that all tests are limited in what they can tell you, so smaller and more varied tests tell you more while decreasing the odds of being misled by any one of them. Be conscious of the FizzBuzzes in your process, and if you can, build your own: a narrow, contrived test that fits the actual duties can be cheap to make, simple to execute, and tell you a lot more than you might think, provided it’s treated as a data-gathering moment rather than a gateway.
And if you get a chance to build something called “the Smashificator”, grab it.
Justin Johnson is a senior software developer at OpenRoad. He plays a lead role in our hiring process for developers and QA engineers, including building the Smashificator.
We’re pleased to announce that the responsive website we designed and built for CBC/Radio-Canada Transmission has won Best Broadband Website in the 2014 Davey Awards! Congratulations to our talented team of project managers, designers, and developers, and a special “thank you” to our client, the Canadian Broadcasting Corporation.
The CBC Transmission website drives revenue by enabling potential clients to easily find out more about CBC/Radio-Canada Transmission. Whether they’re in the office or out in the field, potential clients can use the new website to easily learn about specific CBC towers and service offerings.
Got a mobile app? Test it on your toddler first.
I know you’re not really supposed to give your toddler an iPhone or iPad, but, really—it can’t be helped sometimes.
Before I had a kid, I’d judge parents who would give their child an iPhone. But now that I’m a parent, I totally understand. Parenting is hard, and parents are usually exhausted. Sometimes it’s the only way your toddler will stand in that Santa lineup for 20 whole minutes.
If you’re a parent and you haven’t given your child your smartphone or tablet, that’s great! I envy your discipline. The American Academy of Pediatrics suggests that screen time should be avoided for children under the age of 2. I agree with them. My toddler only gets my iPhone occasionally, not all the time, every day.
But for the purposes of this post, let’s assume it’s more than likely that your toddler is going to get their hands on your mobile device sooner or later.
James Young has more than 10 years of experience in software development and quality assurance. Before coming to OpenRoad in 2014, James worked with BC Housing where he was responsible for writing and executing complex test cases and verifying the functionality of its web-based applications. Previously, he consulted on Siebel CRM implementations for clients such as Marriott International and DishNetwork. At OpenRoad, he has tested web applications and worked with the development teams for Clicklaw, TI Corp and Pokemon projects.
James has Bachelor’s and Master’s degrees in Computer Engineering from the University of British Columbia. Outside of work, he can be seen playing basketball and badminton or working out in the gym.
Jessica is a Certified Project Management Professional (PMP) with diverse experience as a project manager and project coach. As Director of Project Services, Jessica spearheads all areas of OpenRoad’s project management, bringing an eye for the big picture and the ability to balance a project’s technical and creative needs to meet business objectives.
During her career, she’s managed CMS portal implementations, rebrands, online video games, software implementation, and website builds. She’s worked closely with both small businesses and Fortune 500 companies like Coca-Cola UK and Pfizer, along with the U.S. Department of Energy, to name just a few.
In her spare time, you’ll find Jessica outside: climbing boulders, snowboarding, and riding bikes.
The UX community in Vancouver creates some of the best-designed experiences in the world. As founding members of VanUE (the Vancouver User Experience Group), we were thrilled to co-present the inaugural Vancouver User Experience Awards on November 26, 2014.
Back in 2003, a handful of user experience practitioners met, hoping to find a way to connect Vancouver’s burgeoning UX community. Since then, the resulting organization—VanUE—has grown organically over the past 11 years to over 1400 Meetup members today, with a great lineup of monthly UX events.
This year, for the first time, we set out to celebrate and recognize the great work being done in our own city.
My name is Dave Kachman and I have an iPhone 4. I’ve never met Siri. I type in a 4-digit password instead of scanning my thumb. I have never experienced LTE.
Most times, I despise the “spinny”.
Figure 1 – Animated GIF of the classic “spinny”
The “spinny” is an animated GIF image commonly used to indicate that a web application is loading something in the background. As users of the web, we started to see our beloved “spinny” on many websites when AJAX was introduced (which allows websites to take actions asynchronously without reloading the entire page). These actions could sometimes take a fair bit of time, so there was a need to inform users that the site was doing something in the background.
This is all well and good, but only if the wait time is reasonable. As I have witnessed over the last couple of years, adding a “spinny” whenever AJAX is used is not enough for all users. This is especially important for those not using the latest and greatest technology or those in areas with spotty network coverage.
The idea of “reasonable wait times” is not new. Jakob Nielsen wrote about reasonable wait times in an article from 1993. He notes that human attention drifts after about a second of waiting, which means progress feedback must be given if a task takes longer than 1 second to finish. Attention begins to drift again with delays longer than 10 seconds, after which Nielsen recommends giving users more frequent updates on how the task is going.
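Nielsen’s thresholds translate naturally into UI logic. Here’s a minimal sketch of the decision (Python for illustration; a real web app would make this call in JavaScript, and `feedback_for` is an invented name):

```python
def feedback_for(elapsed_seconds):
    """Pick a feedback style based on Nielsen's wait-time thresholds."""
    if elapsed_seconds < 1:
        return "none"      # under a second feels instantaneous; no indicator
    if elapsed_seconds < 10:
        return "spinner"   # a generic busy indicator holds attention
    return "progress"      # long waits need percent-done or status updates
```

The point isn’t the code so much as the policy: a spinner alone is only the right answer for the middle band, not for every AJAX call.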
We were thrilled to host an Open Studio event on September 20, 2014 as part of the inaugural Vancouver Design Week. A city-wide event, Open Studios connected the many Vancouver design studios and industries, with 34 different studios opening their doors on a sunny Saturday afternoon.
Designers from all disciplines—as well as the general public—had the opportunity to engage with the many spaces, processes, and people that form our vibrant community. By exploring our studio and seeing our projects throughout their various stages of completion, guests of OpenRoad got a rare glimpse into how we work with our clients to produce unique results through strategy, design, and development. Plus, there was great local food, beer, and wine on hand.
It was a great opportunity to see old friends, meet new faces, and talk design. A huge thank you to Vancouver Design Week’s organizers, our volunteers, and to everyone that came out!
Please take a peek at our gallery, and we hope you can join us next year.