Sunday, November 30, 2008

My Browser Is A Mess

I am a little late updating as I was trying to get a website revision out the door. It didn't happen like I wanted to, because JavaScript is an abomination of a language that has plunged us back into the software crisis of the 70s. I will explain that when I can write it up properly, but it will be a problem if we keep insisting on using web browsers as the delivery platform for all new programming, including on mobile devices.

One problem I am running into is window management. I will often have two web chat systems, webmail, 8 tabs, and assorted sub-windows open and running in my web browser. Mentally these are separate applications that I need to keep track of and manage on my screen, but the operating system shells of the world all treat these windows as belonging to the same program, and show me just one entry in the taskbar (XP), one icon on the Dock (OS X), or one item when Alt-Tabbing (both).

Something needs to be done here, and I am not sure if the browser developers can do it.

Sunday, November 23, 2008

People Like To Talk. Especially To Each Other.

Again: Twitter is a microblogging service in which a user writes short entries, 140 characters at most. You can send your updates, called 'Tweets', over mobile text message, over IM (when it works), and over the web interface. Twitter users subscribe to each other's streams. To direct a Tweet at a specific person, you put '@person' in front of it; think of the @ as a symbol for the word 'at' here. So to write a Tweet you mean to direct at me, you would send an update like '@fj My comment is directed at you'. It would still show up publicly in your Twitter stream in between your general Tweets, but it would be sent to me too, even if I do not follow you.

I have been noticing something interesting in the tweets of my blogging friends as they create Twitter accounts: first they start off with a Tweet or so a day. Then five a day, maybe one '@someone' Tweet per day in between the rest. Then, suddenly, their daily output is a lot of '@someone' Tweets and a few general ones. You can just see their progression of starting out slowly and then suddenly getting caught up in the society of Twitter. More interesting to me is that the @someone format exists at all. Twitter was created for people who felt stifled by the conventions and format of standard blogging, where you had to think about entries and have a narrative and worked-out thoughts. Twitter was just supposed to be random quick thoughtbytes. No comments, no follow-ups, no threading, none of the standard blogging stuff.

Except that people are social. They see stuff, they want to react to it. They want to answer, they want to question. We have these massively complicated brains to manage being social animals who think in symbols and communicate through and about them. People whose brains don't do that the way 90% of the rest of the pack do are considered defective and get labels like Asperger's and autistic, because indeed they will have a hard time being part of the world. We want to talk, and we will work it into every system we can, even if the system does not explicitly allow it. Graffiti gets made as an answer to being in the urban landscape, but then gets re-tagged and over-written. Books get their margins written in and are put back in the library. People yell at TVs, often in utter frustration because we know the TV can't hear us. People talk back to movies, less frustrated this time, because they are actually talking to the other movie-goers. Billboards get 'defaced'. Wherever we can, we go into dialog. (Incidentally, I do not condone most of the forms of dialog I listed here; I am just stating them for illustration.)

Computers make this two-way dialog very easy, and the web sites built around it are the most successful: the blogging revolution, forums, social networks. Facebook now allows commenting on status updates, which are short statements of the user's state of mind, much like Tweets. But by making explicit commentary fields for status updates, the dialog there is controlled, directed, and far less disjointed than '@someone' Tweets, where you miss most of the conversation unless you follow everyone involved. There's no threading, but because of the ephemeral nature of the Facebook news stream, so far the number of comments on a status seems to stay in the single digits, so they remain manageable.

Making a web system that is about expression but does not include a way to react is not just a lost opportunity; it is inviting frustration (we want to talk), and then reaction (because we will talk): competing websites, defamatory websites pointing at your service, websites discussing your service. And that means a loss of eyeballs, and also of control. Right now, adding a comment system to any webpage has become even easier: with their comment system looking outdated next to Disqus and LiveJournal and the other commenting systems in blogs or that can be added to blogs, TypePad announced the beta of their own new commenting system with avatars, profiles, threading, and connections, called TypePad Connect. With just a click any TypePad site can migrate to the new system, but, more interesting, Six Apart claims that you can add this system to any web page with a little JavaScript, not just to TypePad blogs.

This is really quite useful. I am right now working, as a favor, on a little website for a friend's book, and an important part is the Errata page. There are many errata (they have cost the book some stars on Amazon) but we know we haven't found them all, and other people will. Do we need to add a form to the page to submit more? We can't add an email address; it would be spammed within seconds of going live. What kind of form? What kind of scripting? Hey, the writer already has a TypePad-based blog. And the migration to TypePad Connect went well. So if the writer just adds TypePad Connect comments to the Errata page, he will get emails when people report errors, and we can incorporate them easily. No extra server-side systems for feedback need to be installed, no new passwords, no avatars, nothing heavy, just something light that extends previous systems. Will it work? We do not know yet; we haven't tried. And comments are a double-edged sword: they have to be maintained and come with their own spam -- just check the comments on the web page where I first saw TypePad Connect announced.

People like to talk. They will do it where they can. The talking, when managed, is incredibly useful. Harnessing it makes wonderful things happen.

Tuesday, November 18, 2008

Stealing JavaScript

I was talking to a new friend who is starting his own small web agency, targeting small businesses. At this point it is a two-person operation: him for sales, and another person for the actual creation. "He's all into XHTML, CSS, validation, talks in pixels."
-- "Who does your JavaScript?"
"We just steal that."

It's true. And it is very much done, and not just for JavaScript. I am not the only programmer out there who would rather start with a half-baked program or skeleton or template than have to fire up the Software Development Environment with a blank page, in any language, scouring the web or FTP sites or textbooks for some piece of source vaguely related to what I want my program to do. Good environments like Eclipse will even try to fill in as much as they can for any project for you.

There is some question of ethics there, but in the end we do what we do to get there fastest without committing too egregious an infringement. Of course, the Open Source movement is all about making this not be a problem: Open Source projects are very explicit that you can re-use their code, as long as you release your modifications and re-uses and adaptations to the larger community as well. Trying to operationalize this involves much hair-splitting over copyright notices, over what constitutes re-use and adaptation, and over when the source code for a system should or should not be released back to the community.

JavaScript sidesteps all this. JavaScript source code is interpreted from source inside the browser, so it is the plain-text source code that gets shipped around when pages are loaded. Sure, it can be obfuscated and re-arranged to be unreadable, but a good formatter can undo most of that. So while Open Source developers worry about infringement and which license and notice to use and when they should make source available with their compiled binaries, JavaScript just gets shared automatically when it is used. There is no binary compiled JavaScript to ship.
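To make that concrete, here is a hypothetical illustration (not taken from any real page): a function as it might ship minified, and the same logic after a pass through a formatter plus some renaming by hand. The behavior is identical; only the names were lost.

```javascript
// As shipped in a page, minified and renamed by an obfuscator:
var f=function(a,b){return a>b?a:b;};

// The same logic after running it through a formatter and restoring
// meaningful names by hand; structure and behavior are fully recoverable:
var max = function (a, b) {
  return a > b ? a : b;
};

console.log(f(3, 7) === max(3, 7)); // the two are interchangeable
```

The one thing the formatter cannot restore is the original variable names and comments, which is exactly the credit problem discussed below.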

The result is that the ethic of 'stealing' JavaScript is widespread, but the fruits can't be hoarded anyway; they get shared immediately. In that way it is enforced Open Source, and possibly a very good model for future systems and languages: don't allow binaries to be shipped, just human-readable programs. Java would almost be there if it didn't insist on its intermediary bytecode form. All that's missing is the one 'currency' Open Source proponents say is most important to them: credit. When the copyright notice gets taken out, so does the credit and kudos for who made this brilliant script, and with that, future employability. Another system is necessary to keep that intact.

Saturday, November 15, 2008

A Quick Route To Making E-Ink Readers Useful In The Workplace

Plastic Logic's E-Newspaper

You know, I was just reading the press release announcement of the only E-Paper reader I so far actually like the looks and size of, and I suddenly had this thought after watching the video:

If we really want this reader to quickly replace printed paper, it should come with a paired wireless dongle that you plug into any USB port and that identifies the reader as a printer. Not a hard disk. Not some new viewer. A PostScript or PDF printer. That is the easiest way to get your proofs, your in-between versions, your test lay-outs, your read-on-the-plane documents onto it. Our whole infrastructure is built for printing documents, every program and operating system can do it, and it needs to be done so often in actual business environments that it has been engineered (when it works) to be almost foolproof and mindless. Everyone has been trained in printing, and understands it.

Sure the reader should be able to get data like books and newspapers or other documents some other way, preferably as easy and secure and reliable as the Kindle does now. But to really fulfill the promise of cutting down on paper and paper costs, the quick route is to embed this thing in the printing chain, and easily so on every computer. When you walk into someone's cube and you see something, you just plug in the dongle, your personal reader gets added to the OS as a printer, print, unload and unplug, done.

Wednesday, November 12, 2008

Yeah, Neeext!

Dear Sony-Ericsson

800 bucks unlocked for a laggy phone with a finicky keyboard, a backwards screen, and no 3G for one of your target networks, just because you innovated the Home Screen of its unstable OS?

God. Your company deserves to be in trouble.

Tuesday, November 11, 2008

300 DPI

Detail of 300 DPI test pasted into a Moleskine notebook

Wondering what the real differences would be for mobile UI design when handheld screens end up having a resolution of 300 dpi, I did what every UI designer does: I made a prototype to experiment with. Now, I don't actually have an HTC Touch HD or that new Sharp phone to play with -- and they haven't quite reached the magic resolutions just yet -- but I do have a cheap inkjet printer. Since I expect the form factor of those screens to stay in the small handheld realm, I decided to mock up some User Interfaces on paper for a device the size of my Pocket-size hard-cover Moleskine notebook, which is 9 cm by 14 cm.
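As a back-of-the-envelope check (my own arithmetic, not from any device spec), the pixel budget of a screen that size at 300 dpi is substantial:

```javascript
// Pixel dimensions of a 9 cm x 14 cm surface at a given resolution.
// Illustrative arithmetic only; 1 inch = 2.54 cm.
function pixels(cm, dpi) {
  return Math.round((cm / 2.54) * dpi);
}

for (const dpi of [72, 300]) {
  console.log(dpi + ' dpi: ' + pixels(9, dpi) + ' x ' + pixels(14, dpi) + ' px');
}
// 72 dpi yields roughly 255 x 397 px; 300 dpi roughly 1063 x 1654 px,
// about 17 times as many pixels on the same physical surface.
```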

So I fired up Word and composed four example screens of what a portable device with a very high resolution could be used for. My first creation was a fake medical record for a diabetic teen, as that is the kind of high-volume mixed-data file I have experience with. Typesetting it at a reasonable resolution for a standard web page -- 11 point font -- looks just clunky and wastefully big at 300 dpi, and the two data visualizations I added showed me that far higher data densities should be just fine. So the second mock-up is a data sketch for a ward chart, showing the summaries for 3 beds. (All names fictional, all data fictional if not nonsensical.) The format for the graphs is from Edward Tufte's project for a one-sheet medical record; it normalizes laboratory values to all have the same visual characteristics so that they become easy to glance-read. Turns out that one can make a useful small display like this because of the high data density. It isn't significantly more text than would fit on a 72 dpi screen of the same size, but it is somewhat more, and easier to read. And of course, with the ability to use animation and to expand and collapse charts dynamically, abilities that come from using a handheld instead of a static sheet of paper, I think a pretty credible medical record viewer could be made to fit such a small form factor. It would be far more comfortable than working with the currently available medical tablets and laptops.

Miniaturized version of UI mock up for a 300 DPI handheld
Miniaturized version of UI mock up for a 300 DPI handheld. Click to download the A4-sized PDF.

Two more panels added: one to test using such a device as a magazine, a fashion glossy in this case, and one to view snapshots with. The fashion shots are large photographs from the web, and once imported into an image program with the resolution set from 72 to 300 dpi, of course they become much smaller while keeping their detail. On paper at least, uncomfortably so, and even enlarged the fashion details are not well visible. I have to wonder if that isn't a problem with using photographs originally prepared for the web, though: when using my snap of the yellow tree at St Paul's Cathedral, which was stored at 300 dpi, screen versus 300 dpi print made quite a difference. At identical sizes, when viewing the PDF on screen, the blooms turn into yellow blobs, while printed out, even on my cheap HP that uses many tricks to interpolate a higher resolution than 300 dpi, every flower is separate and sharp. What takes up a whole screen on a computer becomes easily viewable with the same detail in a smaller frame.
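The shrinking is just arithmetic: re-tagging the dpi keeps every pixel and only changes the physical size. A sketch, with a hypothetical 1000-pixel-wide web photograph:

```javascript
// Physical width of a fixed-pixel image at a given dpi setting,
// rounded to one decimal. Re-tagging the dpi keeps every pixel;
// only the printed size changes.
function printedCm(pixels, dpi) {
  return Math.round((pixels / dpi) * 2.54 * 10) / 10;
}

const widthPx = 1000; // hypothetical web photograph width
console.log(printedCm(widthPx, 72));  // about 35.3 cm across at 72 dpi
console.log(printedCm(widthPx, 300)); // about 8.5 cm at 300 dpi: same pixels, smaller frame
```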

Still, nice for pictures, but does it really make a difference for information visualizations? These graphs and text are just as readable on a standard computer screen at the same size, and a 300 DPI screen will not be a full-color reflective experience like paper. And then I noticed a difference: on a standard computer screen, if I wanted to see the detail in a dense graph better, I had to crane my neck to get closer to the screen, or make gestures on the keyboard to zoom in. With the Moleskine, I just held it closer and farther as was comfortable for detail or overviews, while I was standing up or walking around. It doesn't seem like a big thing, but if I was dealing with these kinds of detailed information visualizations all day while going from location to location, I would seriously prefer that.

So, what are the quick lessons for getting ready for when handhelds start having very high resolutions?
  • The photography workflow will need to be changed. Pictures that are downsampled to be 72 or 96 dpi for websites will not use a 300 dpi screen to its full potential to show detail. The workflow for print may be usable here, but the end product or website will need to come in two formats: one for high-resolution, and one for low-resolution screens.

  • A high-res screen doesn't mean you can make the font smaller and add a lot of text this way. Text does become slightly easier to read, but not dramatically so.

  • However, more information can be communicated over a 300 dpi screen than over a 96 dpi one, if not in text: charts and other visualizations can show far more detail, and those will be easier to read. They will require more careful design than the current standard of pie and bar and block charts as supplied by Excel et al. So that's where a lot of the work needs to go.
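The two-format workflow from the first lesson could be as simple as serving density-tagged assets; a minimal sketch, where the naming scheme and the 200 dpi cutoff are my own assumptions for illustration:

```javascript
// Pick the image variant matching the device's pixel density.
// The '@...dpi' file naming and the 200 dpi threshold are hypothetical.
function pickAsset(baseName, deviceDpi) {
  return deviceDpi >= 200 ? baseName + '@300dpi.jpg' : baseName + '@72dpi.jpg';
}

console.log(pickAsset('cover', 300)); // cover@300dpi.jpg
console.log(pickAsset('cover', 96));  // cover@72dpi.jpg
```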

Friday, November 07, 2008

Business Net Filters May Now Be Useless

Everyone knows the Web is an awesome time waster, with many addictions waiting to be had, whether it is eBay or fantasy sports teams or gambling or obsessively refreshing political blogs. And what better time to play with the web than when you are sitting in a chair at a desk, both made to enable using a computer efficiently and effectively, at the time of day you are most alert? So when the Web became a mandatory business tool for everyone in the mid-nineties, businesses were confronted with the fact that they were putting a direct line to brain-crack in front of everyone. Because let's face it, work is boring, and the web is made to not bore.

Enter the business network proxy: a single point that all connections going outside the intranet have to pass through, where they can be monitored and filtered. While many computing policies at large companies try to make soothing noises that you do have a life and they understand you sometimes have to check up on it while at work, people have been fired for being on eBay too much. And the way sexual harassment laws work in the USA, an employer can get in trouble for 'creating or allowing a sexually hostile workplace' if there is a lot of sex being browsed on screens or passed around in email, so that has to be filtered too.

So much can be written about filtering policies and systems, because they always filter out too much and too little at the same time, and they can never keep up with how the web grows. Still, the best policy I have ever heard of for managing people's web use was to not filter anything, but simply to send new workers an email at the end of their first week listing all the sites they browsed and how much time they spent on them, with a warning that next week's list would be posted, with names, in the department, along with everyone else's weekly list. Sitting behind our desks we forget that the network people can and do check how we use that business resource, and just that reminder gets most people in line. I used to bring my own sub-notebook with a wireless CDPD card (an early wireless data standard for mobile phones) because I refused to check my email over the corporate network; nobody but me needed to read that. Yes, it was conspicuous that I wanted outside access, but I also got my work and more done, and people understood I had a private email life.

Now most workplaces think they have some form of control, and that sex and mayhem are being safely kept out the door, if not all of it then enough. Except that yesterday I was sending a friend a URL of something remarkable I saw on a site, and he complained that, ah damn, his company was filtering it -- he was at work while I was not, a time-difference issue. I told him it could wait, and a minute later he responded with "Nope, iPhone 3G to the rescue. Oh wow look at that" etc. Yes, many of us geeks, er, knowledge workers, and an astounding number of people from all kinds of other demographics, now carry the full Internet with an OK screen in our pockets and purses. Corporate filters to keep us out of trouble are useless: we all have our own unlimited data plans and screens to get to it now. I don't think this will be the end of the corporate filtering industry, but boy, HR departments had better prepare new presentations and guidelines about how to use your own data terminals, terminals HR cannot check up on, in the workplace. Mark my words, someone will get a company to seriously pay up because a colleague showed them sexually explicit videos on that colleague's iPhone or G1 or Touch HD, not using the corporate network or equipment, for a second time after having been warned not to do that.

Tuesday, November 04, 2008

Decade Of Finding Context

Um, did nobody notice that the mobile Internet turned 10 years old this summer? I was going over some old documents and realized that I started working on Nokia's WAP Toolkit (later the Nokia Mobile Internet Toolkit) in 1999, and at that time Nokia hadn't even released its first WAP phone yet, so WAP couldn't have been very old. And indeedy, I just found a reference to the WAP Forum, the organization of telephone companies that came together to formalize the WAP standard for the mobile Internet, itself formalized somewhere around the summer of 1998.

10 years of trying, and what we are finally getting is the real Internet, slowly adapting. I can't fault WAP for what it was trying to do; within the context of the time, a whole new stack of technologies made total sense for delivering information to mobile phones. The problem is that they used words like 'Internet' and 'browsing', creating expectations no 4x25 character LCD phone could fulfill.

Since the first days I always heard stories of how WAP would be great not for playing Bejeweled or downloading and reviewing a spreadsheet, but for getting coupons. "You'd be close to a pizza place or a Starbucks and then you'd automatically get a coupon and that would drive traffic into the stores and the stores pay for that, man" was always the use case bandied about, which in my cynical days made me describe myself as someone who worked in "the Mobile Pizza-Coupon Delivery Industry". The scenario never worked out, because you always had to find out where the phone was before you could send the right coupon, and nobody could make up their minds whether the consumer should pull coupons (i.e. have to try to fetch one every time they were in a new shop) or get them pushed (i.e. be bombarded). Both options suck. Neither of them is going to 'surprise and delight' anyone. To put it in terms I'd hear while working at Disney, there was never any magic in delivering coupons. Especially since the scenarios were always about making hapless consumers buy and getting money from shopkeepers for delivering this crap, never fundamentally about making people happy.

What you need to deliver magic is to know the user's context. Fulfill the anticipation. Either by creating the context and the anticipation, by building the park and the rides, or by knowing where the user is and what they want. Location-based services are all about knowing the user's context: where are you, where do you want to go, how can I help you with that, what can I give you at those places? It's not the only piece of context you need to deliver magic, but it is a very important one (which is why Disney controls its contexts in the parks so tightly), one that manufacturers have been trying to put into devices as fast as financially possible, and that application developers have pounced on to unlock with applications. A whole industry, GPS devices / sat-navs, was created in the 90s around this one piece of context, a viable and very big industry, because there was such demand for help when, well, navigating. Now, in this millennium, this industry is being networked so that the devices can update the context live. And thus it is merging with phones, because phones do networking and communication.

We want our mobile devices to help us. To really help us, they need to know our context. Location is one piece, and it is making life so much easier. I have moved to enormous cities twice in the last 2 years, and I am navigating them like I have always lived here, saving so much time. But context is more than Location. The next one to lick is Intent: what is the user doing? Can I, as a mobile device, disturb? How can I help? This one will be a bit trickier, and it can wait until Location has been fully exploited and played out. But start thinking about it.