Wednesday, January 18, 2017

Fast Food UX

As appealing as it is from a potential cost-saving perspective, everybody and their grandmother will tell you that UX can't be offshored. Most say it out of common sense, but common sense isn't always so common. For five years, working in various capacities with medium-sized software development houses that had an offshore component, I witnessed some of the most bizarre examples of what people are calling "UX". Read on for my particularly intimate experience of trying to commoditize what can't be commoditized.

As DevOps eclipses various delivery models, I've had to question how much the rise of UX has influenced this change in how we staff up for the delivery of software and technology services. I've seen it time and again; tech leadership hears the buzzword of UX and attempts to shoehorn a UX practice into an existing delivery model; in some cases the delivery model has a sizable offshore component. The attempt is made and begins to break down; fingers get pointed and processes are reinforced, but the focus on people and co-located teams is never addressed because it's too much to reinvent a delivery model "just for UX". After all, software is the core deliverable, right? UX is just a component, a value-add. Wrong.

But let's backtrack for a second and define (once again) what UX is, because the perception that UX can be outsourced comes, understandably, from a confusion of definition. The rise of sonic, haptic, and multi-sensory input technology, replacing what has traditionally been done through a screen-based UI, is also disrupting what was considered the makeup of a "UX team" even a year ago.

The myth of UX outsourcing comes from a validated belief that graphic design can be outsourced; that wireframes, prototypes, and other spot conceptual efforts can be outsourced; that front-end development can be outsourced; and that brand strategy, marketing, collateral design, industrial design, and even behavioral research are afterthoughts to the more core endeavor of software development and the SDLC. This focus on piecemeal rather than holistic UX, sorely lacking in a research component, is what I've come to know as "#fastfoodUX". And as per the connotation, it's a cheap facsimile of its genuine counterpart, a false messiah to the mid-market consumer seeking to build Google for $15,000 and front-row tickets to the Astros game. This misuse of the term UX continues to permeate professional culture to the point where many of us, beaten and bruised by the ignorance and arrogance of the tech community, are throwing up our hands and claiming the battle for definition lost. The fact that UX covers a much broader range of activities that define "user-centered" design is what trips up companies whose service offerings are far less holistic as they focus in on their core strengths.

The jury's still out on forming a dedicated offshore UX team as a piecemeal component of a larger core service offering. But this judge says it's not possible, because UX is high-touch, user-oriented strategic thinking, and this sort of thinking is needed everywhere. Your dev shop is a UX company. Your marketing agency is a UX company. Your product design firm is a UX company. Your research consultancy is a UX company. All of these and more are connected, and to say any one is outside the sphere of a company's mission will leave a UX team poorly utilized. This is where the (more conceptual than physical) structure of a localized turnkey think tank becomes critical to utilizing UX effectively. You can have that with coworking contractors, with or without an office space, as long as you're in or close to your client's time zone. It's the 12-hour turnaround time and the language and cultural issues surrounding offshore team members that are contradictory to the DevOps model, and to the high-touch nature of UX research, development, and design. In my experience, not even the tightest onshore/offshore hybrid delivery process can ameliorate this issue. UX thinking isn't expressed through the concise, piecemeal directives by which offshore teams most effectively engage with their stateside counterparts. It's highly interactive, and highly conversational. UX done effectively is a real-time exercise between stakeholders, production teams, and end-users.

But what do I know? I'm only someone who has effectively employed instant messaging, corporate social media, concept-sharing platforms like InVision, and other long-distance communication systems to help bridge the gap between stateside and offshore teams. I'm just a guy who did everything he could to try to achieve the high-touch, agile feedback loop that was needed for research-based, user-centered design, not just presentation-level skinning, with my own internal team and with clients. All the while I was attempting to be the singular stateside client liaison, often waiting for days to get the sort of results I needed and that clients expected. In these waiting periods, things were forgotten, things were misunderstood, and another 12 hours would go by to correct them, if we were lucky. The most miserable realization for me, however, was that budgeted effort estimates would always be too tight for the bar I had set for quality; that the expectation of cost savings the offshore model offered was "cramping our style." I inevitably had to come back at a stateside rate to get the projects back on track, whereupon I would be slammed for dedicating more time, effort, and energy than we had budgeted for the client.

UX as a per-capita expense runs higher than development. "Cross-functional" and "co-located": these two terms are an anti-pattern to siloed software development, and yet they are two of the most important aspects of successful UX delivery. A proper UX team exists in a collaborative culture with the tools they need to do their job. Ideally, researchers and designers are bouncing ideas off each other in an open collaboration space, even if virtual, at least in the same time zone. At the heart of their conversation is the product experience, from entry to exit. They whiteboard, they test with end-users, they communicate fluidly with each other and with reasonable transparency with the client. After the conceptual efforts are complete, a UX lead ideally remains present through the development process to oversee the nuances of design being executed on the presentation layer by front-end developers.

Tech companies trying to incorporate UX into their offerings sometimes can't get past this initial hurdle. Though it may seem like common sense, it is rare to keenly identify the value proposition of design culture as it relates to technology solutions offerings, and then build a model to justify the expense. In my experience wrangling together a UX team, I've found that visual designers, specifically, require higher-quality screens and more hard disk space for files; those needs in turn spill over into the development space as the files are shared among the team. I've tried almost all the cloud-based file-sharing software worth considering, and we settled on Dropbox as the most robust for cross-platform sharing. With that resolved, there was screen quality: designers require high pixel-density monitors, preferably in dual-monitor setups. And finally, third-party software tools that are Mac-only make handling a PC-based offshore team that much more challenging, as we must keep up to date with an evolving design toolset. A heavy reliance on Adobe's cross-platform Creative Suite put us at a disadvantage against localized teams who, though more demanding in their initial hardware and software costs, are now producing scalable vector design work in Sketch.

*SIGH*. I've seen components of UX outsourced successfully, but never this whole that I speak of. But you motherf**king cheap-asses will keep trying. There are the Fiverrs and Elance/Upworks of the world, daily encroaching on the value proposition of the design agency and its overhead; the software development house and its overhead; the industrial design firm and its overhead. In these models, "UX" design is commoditized successfully because client requirements are simple and devoid of a need for serious, holistic strategy. Clients who are seeking a logo or homepage layout from such entities don't fully understand the value of branding, of information architecture and wireframing, of product design, and even if they did, wouldn't have the budget to pursue the effort of materializing an effective strategy for any of it; i.e., the stuff that's hard; the hallmark of a professional organization's individuality and caliber. From a marketing perspective, cutting through the noise in a world that has twice as many people as when I was growing up is hard. From a software UX perspective, developing a digital solution with a presentation-layer UI that is usable out of the gate is hard. From an industrial design position, the sheer effort of meeting form with function in a hardware product is hard. All of these, additionally, are exercises that consistently draw on several disciplines and skills across brand storytelling, visual design, psychology, and software/hardware development.

So did scruffy, hipster designers influence the DevOps revolution? It may never be admitted by the software engineering community, the question doomed to rot away in the annals of conspiracy theories along with the faking of the moon landing and the JFK assassination. Regardless, technology clients will inevitably come to solutions firms with needs that cross the spheres of marketing and branding, software development, and possibly even industrial design if there is a hardware component to their vision, such as a wearable device or an IoT project. At this time, very few firms are poised to call themselves a "partner" to such clients, with leadership lacking the holistic vision to be truly turnkey. Those that are will embrace this "singularity", this convergence of disciplines. They will have stateside DevOps teams working with a UX component that exists from concept to finish and beyond in the product development lifecycle.

Thursday, May 5, 2016

The Tablet Computing Paradigm

A few weeks ago I purchased a Surface Pro 4. I returned it relatively disappointed. I really wanted to like this product. Now, I've seen articles and Reddit posts discussing the superiority of the Surface Pro to the iPad. Quite frankly, I have no idea why anyone would think this when it comes specifically to the concept of tablet computing: what it should or could be, and how we approach it from a UX point of view.

I've outlined the reasons for my dissatisfaction below. Obviously you can't blame MS for everything here:

  • I prefer interoperability with iMessage across devices. For better or worse I'm trapped in the Apple ecosystem for this reason, though Android had such interoperability features before Apple did. MS has Skype, which is bloated and far from elegant. Attempting to plunk down cash and associate my phone number with send/receive features failed miserably.
  • iPad has considerably better battery life; objectively speaking, the user experience stops when the power does, so friendly warnings to plug in aside, the race for consistency of presence is being won by Apple and will be until wireless charging becomes ubiquitous and renders the issue moot. I don't even know if I'll be alive by then.
  • Windows apps are weak. I was surprised that even essential tablet apps like Facebook, eBay and Amazon were all mediocre compared to their iPad counterparts. I ended up using the websites for each of these services instead.
  • Surface defaults to Flash for video, and with Flash turned off in the Edge browser, most sites show a broken-link icon instead of falling back to alternative video.
  • The Edge browser remains the best browser to use on Surface, but lacks full touch support. To go back, for instance, one still cannot hold down the back button to get a list of the last few sites visited.
  • UX falls apart for touch, as Windows software is generally mouse/keyboard optimized; some of it is also blurry due to the high-density display. Most Surface tablet apps are fine, but I inevitably ended up venturing into the desktop app space, and it was a joke at best. I've often complained about the MacBook not having a touchscreen, but I realize now that it opens up a plethora of UX challenges. I still think a limited touchscreen system is justified for MacBooks; simply being able to scroll a page up and down with a finger would be great.
  • The Surface onscreen keyboard doesn't always appear when entering a text field, and doesn't have a Cortana microphone button for quick voice input. Why invest in a voice recognition system and then make it inaccessible? I found myself exploring the voice recognition software built into Windows, which is clunky and requires a lengthy training session to start using.
  • I'm a huge gamer, and gaming was definitely a selling point for me. In fact, gaming is what I thought would redeem the Surface due to its greater processing power. Touch gaming for serious PC games (XCOM, for instance) was fun, but not enough touch-supported gaming content is out there. I do also own an Xbox and love it, but the Xbox/Win 10 streaming feature is a novelty that I have yet to see the value of. The Xbox SmartGlass app for iOS is absolutely outstanding and more than sufficient for my needs.
  • At $1300 plus tax (I purchased the 256GB / i5 / 8GB version), the price was too high to justify a tablet comparison. Or so said the MS sales reps. Microsoft marketing aims to compare the Surface to a MacBook, and for good reason: the touchscreen tablet experience is mediocre and feels very much like an afterthought. The tablet experience is clearly perceived as a "value-add".
  • Because of this implicit "desktop first" mentality endemic to the product design, I started my experience without a keyboard but in the end had to drop $100 for a "Type Cover" keyboard that only covered half of the product. As a laptop, the Surface also falls flat, despite access to the file system and other things that forced me to revert back to traditional mouse/trackpad + keyboard computing. There simply wasn't anything I couldn't do on the MacBook, and with generally fewer problems.

In the end, I'm emboldened to say that there is no Microsoft tablet. There's a touchscreen laptop, and the tablet features are half-baked. One might say this is the downfall of the hybrid platform, though it's neither endemic nor unfixable. The problem exists in the user experience, in software. Proper focus on the tablet paradigm could easily turn this situation around. But MS seems to still need convincing on the value proposition of a pure tablet experience with little to no accessories to support it.

It's interesting to see that the tablet space isn't being taken as seriously as I feel it should be. Rather than letting it fade away, for me the question it raises is: "How do we make the most out of the tablet computing experience?"

Hint: It's unique. In the early days of the iPad, even Apple made the mistake of conceptualizing the application UX as a larger version of a phone app. There could be no greater misconception. The tablet experience is deeply ingrained in a reading-oriented, casual, low-input media consumption lifestyle. At least up until now: touch simply doesn't allow for data entry at the speed and accuracy of the keyboard/mouse combination. But I don't believe the potential of what we can achieve using touch, exclusively, has been fully explored.

Wednesday, July 2, 2014

Apple Glass? Never.

One of tech's greatest progenitors, science fiction, often paints a picture of inelegance: Heavily modified humans with cybernetic implants; exoskeletons cradling soldiers as extensions of their own bodies; interfaces suspended in mid-air requiring histrionic arm motions to navigate while emanating the light-blue glow of a projection TV at night. My research in sci-fi design inspired by the Star Trek Verizon sponsorship in '09 made it obvious to me that at least one school of future design had a very formulaic look driven by the visual inspiration of circuitry and other electronic imagery.

It's to this very small niche—those who allow technology to cross into their personal aesthetic—that I feel wearables like Google Glass will be acceptable. It's also why, when it comes to ubiquity and universal adoption of a technological product, Apple remains the leader.

"You want me to make a donation to the coastguard youth auxiliary!"
Image copyright © 1985 Universal Studios/Amblin Entertainment
Wearables are here to stay, portending a near future of cybernetic implants affecting our perception of the world around us. In this context, we have to discuss the subject of hardware and software failure, be it on, or even in, the body. This need not happen in actuality; the mere possibility is enough to raise discomfort towards adoption, and to ruin the user experience of those who have taken the plunge. The primordial master-slave relationship of wielding a tool, be it the reins of a horse, a sword, a hammer, or, in today's context, a supercomputer in our pocket, came baked into our subconscious with a presumption: we are in control. Though we may grow increasingly dependent on the held item to the point where it clearly has control over us, the illusion of control remains because it is in the hand.

Now, Frodo Baggins, put on the ring you once held. Move that same dependency to an object you wear, and at the very least symbolically, we relinquish control. In the case of Google Glass, an item you put not only on but in front of your face, we're looking at a degree of intimacy breached that is a tad more pervasive; a tad more demanding of an argument for adoption. This is key in understanding why certain future attempts at wearable technology will succeed in the mainstream while others will fail.

With the wearables revolution well under way, the iWatch cometh. Over the years I've heard people accuse Apple of making high-tech jewelry, style trumping substance in the classic case of artsy dreamers taking the helm. Jobs's relationship with Buddhism is often cited as part of this predilection towards design harmony. In fact, you don't have to go too deep to understand this formula. It's this very quality of jewelry being surface-level, superficial, and even in some cases discardable that allows for the ubiquity of adoption that the iPhone, iPad, and future iWatch capitalize upon. Though none of these items command a discardable price point, they are treated with as much acceptance. How do they do this?

The history of the human race gives us enough foundation to understand the simple fact that cosmetic items such as rings, bracelets, and necklaces are an acceptable degree of intimacy for just about anyone. They are also far older and more universal than anything to evolve out of the information age. At their core, these items enhance our attractiveness, often reflecting our perception of self-worth. This—not texting, playing a game, blogging, or talking to someone at a distance—is the core value of jewelry. The exploration of this phenomenon is called "fashion," and as we think of "fashion," we draw images of a world that is very real and yet, at times, very eccentric and impractical. Almost the antithesis of the tech world. To this day, no company harmonizes fashion and function like Apple.

The next level of intimacy is glasses and other objects with functional utility, which human beings justify including on their person for that utility, but which are theoretically more cumbersome for having to satisfy two sets of criteria rather than one: fashion and utility. History has also shown us that glasses, even without the functional aspects of a computer, are cumbersome and exhaust the value of having them on for any extended length of time. Though the utility may increase with higher functionality, I believe the cost of inclusion on one's person remains too high for Google Glass to become a staple of the average person's lifestyle, unless they themselves require prescription lenses.

In a roundabout way, I've illustrated a three-part trade-off: fashion vs. utility vs. the burden of indulgence. Google Glass tells me a lot about Google's weaknesses in failing to understand this three-part dynamic. It tells me that Google doesn't pursue or understand fashion nearly as effectively as Apple does, and that Apple's strength in hardware design remains unwavering despite its weaknesses in software (I finally installed Better Touch Tool for simple split-window management and quick desktop access a la Windows). There's an impracticality to Google Glass that is by its very nature representative of the company's core drivers of data and information, while the desktop user experience, at least, belonged to Microsoft until lately.

Fast forward to 2012 for an example that demonstrates imbalance perfectly. The "bigger the better" phone wars are on, and Google/Samsung bring us the Galaxy Nexus, the pure Google phone. I switched to Android after tiring of Apple's restrictive ecosystem. I never felt the Gnex was a good fit for the hand. The battery life was atrocious. NFC failed the one time I tried to use it, at an Atlanta gas station. (Granted, I don't see the American South as a high-tech bastion by any means and would probably have used it more if I were in Austin or the Valley.)

The Gnex was far from elegant or even functional hardware, but I loved universally texting anyone from any device thanks to Google Voice, and having my wifi passwords maintained in the cloud. To this day I believe Google Voice is superior in its recognition capacity to Siri, and I won't even speak of Google Maps in comparison to Apple's. Google remains my model of how to use data together with the cloud to provide a great user experience in software. iOS could definitely learn from Android. But Google, in partnership with Samsung and other hardware vendors for the one-size-fits-all Android OS, also remains my model of what not to do with hardware, with Glass being a very clear extension of the Google mentality.

Google Glass is fun, a novelty, and portends a dismal future where we're all walking around talking to ourselves with augmented-reality glasses on. The burden of indulgence is relatively high in this case. Perhaps Apple's pursuit of a less-intrusive-than-glasses-but-more-intimate-than-a-phone "iWatch" is a more ubiquitous foray in the exact same direction as Glass.

Thursday, March 21, 2013

The UX Meditation

Honbo Garden in Osaka (CC license).
I can safely say I'm doing what is my core passion these days: UX. In some sense, it's my zen garden; a chance to say "stop" to manic, unmeditated design requests and the breakneck turnarounds found in the industry formerly known as advertising.

High visual design is often thought of as coming from an R/GA, Wieden Kennedy or one of the thousands of sexy, boutique digital agencies out there today. However, one of the problems I've had with Madison Avenue's approach to design for digital is a culture of rock stars and one-off, "campaigny" executions that, though they have their place, don't really do it for me. There seems to be a problem accommodating scale given the culture and expertise found in such environments. And the last thing I want to work on is another glorified digital rubber chicken, hand buzzer, whoopie cushion, or a fake fly in a fake ice cube that thrills for a quarter and is never seen again. Worse yet, I'd dread working on an enterprise-level site that is treated more like a small campaign microsite.

I like to build products that last; ecosystems that one can nurture and grow into something used on a massive scale on a daily basis. After several years of seeing it done wrong at some of the largest interactive shops, I decided to pursue design and creative direction under the newly-coined term "UX" as Lahiri Studios.

User experience is something I treat as an umbrella term under which we find user research, information architecture, prototyping and visual design. A UX professional, therefore, is a more involved architectural and strategic thinker; someone who doesn't push pixels in Photoshop without stopping and thinking several levels deep into the implications of a design decision. I can't imagine it being done any other way these days, but I know it is, all the time, and I'm just thankful that it's not my world anymore.

The fact that Valley companies are pursuing design aggressively is the light at the end of the tunnel for me. I think Apple spearheaded this and left an indelible mark on the lives of design professionals in technology. It's led to a new level of respect in the industry for what has been as much a deeply personal passion as a professional role for the last 13 years. Companies continue to take notice of the importance of design, and of Austin's explosive and extremely disruptive tech culture as well. I truly appreciate the methodical approach of tech as it has existed in software and systems development for decades now, and would love to see that same approach applied to design. I believe it's not just the right way, but the only way to design for digital media at scale.

Saturday, March 16, 2013

Taking the Leap

What are social networks good for again?

Rumors abound, and LinkedIn stalking is at an all time high, so I figured I'd write a complementary post to the last one. 

Five years in Atlanta was enough to build my career and complete 13 years in the creative business. Now I can safely say I have some relative freedom to define my career as I choose. 

Southby - Geeks being herded into the keynote presentation.
Moving to Texas and witnessing the Austin startup ecosystem has made me acutely aware of the power of thought leaders and what attracts (and repels) the intellectual class. I'm not speaking of the monetarily wealthy, the privileged or the famous, but the minds among us who are defining the future, and the environment needed for those minds to thrive. 

These conditions have done their part in encouraging me to lawyer up, incorporate and start my own consultation practice, pursuing my creative passions independently. I'm doing things my way, both out of choice and out of necessity, given the changing marketplace for creative services. In the back of my mind are the words of an industry associate, entrepreneur and branding genius Barry Deck: "Agencies are dead. Nobody wants another agency."

I must say, the design consultation model has changed considerably since I started in this business. With Adobe switching to a subscription service, entry costs are lower than ever for creatives to at least learn the tools of the trade. With a degree of entrepreneurial panache, a good eye, and a strong knowledge of the digital medium, the sky's the limit. Given that I work virtually and share coworking space with other entrepreneurs, I can't imagine working at a traditional office anymore. I work remotely from each of the four Texas metro locations depending on my clients' needs.

It truly is the new Wild West out here: Houston with energy and health care, Austin with its focus on technology. Dallas is the biggest city in Texas with something for everyone; San Antonio is arguably the most beautiful of the four, with an equally diversified economy. The unforgettable San Antonio River Walk harkens back to my days with Volvo and Dan Criscenti, my good friend and former boss.

158 people a day are making their exodus to the city of Austin alone, many from Silicon Valley. The result of this brain exchange couldn't be more obvious than in Austin's recent approval for Google's new fiber-optic Internet service. A host of mind-blowing Texas startups both within and outside of Austin are making huge changes in how we live and work in the 21st century. I even managed to find the perfect client here in Houston, closer to my family, and haven't looked back since. I'm working on the things I've always wanted, in a way I could never have imagined.

All four burgeoning Texas metros are within three to six hours of each other, and I consider all of them "my city": a self-contained economy seeking remarkably little from its neighbors while contributing in excess to the world's economy and cultural landscape.

That's about all there is, really, behind my going independent. On a personal level, family had been my initial reason for returning to Texas, but friends quickly followed. I had the distinct feeling that I was caught up in something bigger, something with its own magnetism, almost immediately after arriving. Between my newfound friends and family, I find it quite a blessing to call Texas my home.


Sunday, July 8, 2012

iOS X?


Macs have long been the designer and art director's productivity platform of choice, but I believe that's changing.

Apple has been shoving the ever-profitable content-consumption model of OS down our throats for some time now, in certain cases at the cost of the content-creation platform. New editions of OS X have awkward multi-screen swipe functionality borrowed from iOS: novel at first, but one quickly grows frustrated with certain windows being associated with specific screens, forcing the user to jump back and forth. And there remains the failed ideal of distancing the user from the file system, which grows ever more painful when trying to share file paths with coworkers. To this day there's no easy way to globally minimize all windows to reveal the desktop, and don't tell me about Option-Command-M; that's just for Finder windows. These are issues to me because I'm a platform-agnostic designer who has used both PC and Mac intimately. And I have little hope that these issues will be recognized, let alone resolved, by the OS X team in light of the myopic direction they're heading in.

Now, with the new MacBooks, we find a move to a sealed hardware "device" model not unlike the iPhone and iPad. Furthermore, the 17" model has been done away with. This is a move towards positioning the MacBook Pro as a portable consumption platform in the footsteps of the iPhone and iPad. That would be fine if we were dealing with a $500 device that checks email and browses the web, and if creativity were limited to the sophistication of what can be achieved in a casual entertainment app like Draw Something. But we're not. We're looking at an $1800+ piece of hardware that must perform on an enterprise level for serious content creators; namely, graphic designers!

Sophisticated content creation requires scalability, not necessarily the longer battery life that justifies the sealed hardware approach. For instance, I don't know a single advanced Photoshop user who hasn't realized they didn't have enough RAM after a software upgrade, or who hasn't had to upgrade their machine in light of a major project that pushed the envelope in a way that wasn't initially expected. Anyone who's sat there watching Photoshop grind away trying to open a large file on a system with only 4 gigs of RAM understands just how much the creative process can be hampered by hardware limitations. As for battery life: most of us are plugged in with a dual-monitor setup unless on business overseas or presenting.

One could argue that Apple is aiming for a unified hardware standard, like console game systems, that allows for fewer hardware unknowns and hence better-optimized software. I'd applaud this ideal if in fact it was part of their reasoning. But I don't see it succeeding for a very long time. As in the past, software has pushed the boundaries of hardware, and that trend hasn't let up, for a number of reasons. For this fact alone, I see new MacBook owners being considerably bottlenecked, where the cost of more RAM is now the cost of a brand-new machine, and where the max RAM (8 gigs) one can have in the current selection of MacBooks may not be enough. Heck, it's frustrating as it is to only have USB ports on the left side, or to have to listen to a startup sound that's been the same since the early 90s.

Has Apple forgotten the needs of creative professionals, historically passionate champions of the Macintosh platform, in pursuit of the more profitable consumer-level demographic? I believe so. We've all seen how Jobs treated Adobe and how everybody drank the Kool-Aid; for those of us who understand and are deeply invested in the creative space, the disparagement of the venerable Adobe brand was painful to watch, and their adaptation to the situation just as exciting. Multimedia shops like my own, the world over, are adapting to the shutting out of Flash, particularly on mobile platforms, with varying degrees of success.

And now we're faced with a new challenge.

What's most scary about Apple's wanton push for the consumption platform is that, with Microsoft flailing in the wind with Windows 8, there's nobody to competently fill the empty space it will leave for serious creatives seeking a serious creative workstation with the portability of a laptop. It appears that one's best bet, if things get any worse, will be *GASP* Windows 7, where I can copy a path and paste it into a messenger window in two simple steps, minimize all windows to see the desktop with a single click and vice-versa, maximize two windows to exactly half the screen each, and paste a path into an Explorer window without resorting to Command-Shift-G and other multi-step acrobatics. And it goes without saying that I can add more RAM to most Windows laptops any time I want.

Saturday, February 11, 2012

Beyond 960

In the typical discourse of our evolving interactive design field, I hear a lot about the 960 grid. In a recent attempt to adhere to this framework with a very ambitious responsive web design project, I realized it fails to address a still very important space: the widescreen laptop and desktop, most often beyond the dimensions of 1024 pixels wide.

85 percent? That's like, the poor people, right?
I have, more often than not, seen the screen real estate beyond 960 treated as dead space with very little thought. Usually I see a solid color or a repeating background extended over it with the boundaries of the framework preventing any further utility. The 960.gs site itself makes little attempt to acknowledge this space, but is it that insignificant? Should we herald the dawn of the 1024x768 tablet dimension as the end-all of our screen-width and height optimization efforts?

In fact, the case is the exact opposite, at least for now. According to W3Schools, the most popular screen widths are beyond 960, even beyond 1024, and have been increasing steadily over the years with one exception: 2011, which saw the rise of tablet computing and specifically the iPad. Even so, according to the statistics in the above-cited link, those wider screens, at 85 percent of the W3S demographic, remain a considerable majority to weigh when deciding what our canvas dimensions are for any given interactive design project.

Within the interactive world, responsive design has leapt to the forefront of our passion. My geeky colleagues at Brown Bag and I eagerly go to websites like bostonglobe.com and scale our browser windows to witness with awe the magic of responsive style-sheet substitution. An entire slew of responsive templates has arisen in a very short amount of time, and yet very few have given any thought to optimization, utility, and aesthetics outside of the 960 grid space.

Me gusta.
There's also something else affecting the premature abandonment of the widescreen dimension: the switch to the mobile-first design methodology. Now mind you, I'm a huge fan of this school of thought, usually attributed to Luke Wroblewski. In the long forgotten past (i.e., a year ago), before mobile-first, those of us working in the responsive space would come upon uncomfortable in-between states where elements of a given design simply wouldn't cooperate. There would be too much horizontal space for two columns, but not enough for a third, and this state used to be the tablet space. Things started with a desktop design, then a complementary mobile design was added, and the nascent tablet space had to deal with that wonky two-column spread that was too far apart.
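
To make the contrast concrete, here's a minimal mobile-first sketch in CSS. The .content class and the breakpoint values are hypothetical, chosen only for illustration: the base styles assume the narrowest screen, and columns are added as real estate allows, so no wide design ever gets squeezed into that wonky in-between state.

    /* Mobile-first: base styles target the narrowest screens;
       wider layouts are opted into via min-width media queries. */
    .content {
      display: block;        /* phones: a single column */
    }

    @media (min-width: 600px) {
      .content {
        column-count: 2;     /* tablets: room for two columns */
      }
    }

    @media (min-width: 1024px) {
      .content {
        column-count: 3;     /* widescreen: the third column arrives */
      }
    }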

As we move towards mobile-first design methodologies, the tablet is less of an uncomfortable in-between space. The choice is instead made to alienate the more popular widescreen desktop, creating large aspect-ratio "gutters" on the left and right reminiscent of 4:3 content on a widescreen, 16:9 TV. In an aesthetic sense, this is still a far better place to be. But unlike TV, we can actually do something with this extraneous area of often 300 pixels or more on each side. We're interactive designers, and the web remains our creative, exploratory space. Where the tablet was our undiscovered country, the widescreen desktop and laptop space is our forgotten land, rich with untapped utility and creative opportunities.

Facebook brings ancillary utility to the widescreen gutter.
In fact, Facebook, at least, is utilizing the widescreen gutter the right way. Stretched out beyond 960, we find our friends' latest activity and an extension of the chat utility. One finds that the gutter is ideal for support content; i.e., things not necessarily essential to the core experience, done away with when brought to the tablet and mobile level. Facebook, to this degree, has a pseudo-responsive nature that comes from treating itself like an application more so than a website following a responsive fad. It does what works, and explores ways to enhance the user experience for its specific audience.
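
In the same spirit, here's a hedged sketch of how such gutter content might be handled, assuming a hypothetical .support-rail element for the ancillary activity/chat content; it stays hidden until the viewport actually has room beyond the core 960 layout.

    /* The support rail only exists where the widescreen gutter does.
       The .support-rail class, the 1280px breakpoint, and the 280px
       width are assumptions for illustration. */
    .support-rail {
      display: none;         /* tablet and below: core experience only */
    }

    @media (min-width: 1280px) {
      .support-rail {
        display: block;
        position: fixed;     /* pinned beside the 960px core layout */
        top: 0;
        right: 0;
        width: 280px;        /* fits within the ~300px gutter */
        height: 100%;
      }
    }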
 
And on that note, I would like to discourage treating responsive design purely as a cosmetic exercise that benefits developers by sparing them the creation of multiple sites. In the end it's about the user, and smart design is concerned with the much larger picture of enhancing the user experience. Screen real estate is only one factor in the larger-picture approach that brings greater value to consuming content on the web. Let's not waste those precious pixels! Despite the rise of the mobile and tablet spaces, responsive design can and should exist beyond the world of 960-pixel-wide screen sizes. Statistically, there remains a huge audience for it.