Friday, December 26, 2008

Repair Your Gifts Before They Break

When Microsoft rolled automatic updating functionality into its operating systems so the system could download and install blessed patches and security fixes from Microsoft automatically, a lot of computer people were very worried that this could be used by bad people for bad ends. If the computer could be tricked into thinking that some other site was Microsoft, the computer would happily install bad code in the bowels of the operating system where nobody could find it, change it, or remove it.

Of course, it now turns out that, for me, the baddie I should have worried about was Microsoft itself. Recently my subnotebook updated itself to SP3, and then refused to start up. It would find the boot block, start loading, and then bluescreen, immediately reset, and try to boot again. The blue screen would come and go so fast I couldn't read what the actual problem was. I actually had to photograph it with the camphone to be able to read the error, trying repeatedly as the computer went through failed boot cycles so I could get a snap at just the right moment the error was on screen. Which is why commuters on the train into London could watch this guy repeatedly take pictures of the screen of his laptop with his phone; fortunately for my ego they pretended not to notice. When I zoomed in on the picture on my phone, I could read the error: UNMOUNTABLE_BOOT_VOLUME. Somehow Microsoft sent me an update that rendered its own operating system unreadable to itself. Meanwhile Microsoft auto-update seems not to have made a dent in the swarms of compromised Microsoft-running machines that have been taken over to clog the Internet with spam. Things aren't working as planned.

The fix for this is to pop in the XP disk and run the Recovery Console to start chkdsk, a program that checks and repairs the file system on the hard disk. Alas, in the last move I lost my XP restore disks, and, well, I wouldn't be able to use them anyway. This is a subnotebook: no floppy, no CD-ROM. Just like so many netbooks that have been unwrapped under Christmas trees this season.

Well, I could boot from a USB stick, but all I had was a 1st-gen iPod shuffle, the white one that looks like a pack of gum, and a Windows 2000 laptop in the corner. I now know how to make such a stick bootable into Linux -- which is no help for repairing an NTFS system -- but at last I found a program that formats memory cards, floppy disks, and USB pendrives into bootable DOS systems with chkdsk on them. With this I managed to make the subnotebook boot off the iPod, run chkdsk a few times, and then restore my OS. I just condensed 4 hours of trying and Googling on Christmas Eve into a paragraph.

So, dear readers who ended up with a netbook, do yourselves a favor now that Giftmas is an extra-long weekend: go rummage for one of those USB pendrives you have left over, and read up on how to make a rescue or boot disk for your netbook's operating system on that USB stick. Then label the stick clearly and do not throw it in some drawer, but put it where you will look when problems arise, like the shelf with the manuals and warranty. You'll thank me when a hard knock or unknown update kills your operating system or drive, and you just want to rescue files or do a disk check. It's no fun to be unprepared for those moments, and finally all those extra USB sticks come in handy.

Sunday, December 21, 2008

Musings On Cameras

Another 'Hysterical Crying Girl' video clip is making the rounds, this time a US student who is, well, strongly reconsidering her idea of using the fire extinguishers to 'make it snow' in her house so she could delight her sorority sisters with snow angels.

Ok, if the idea sounds dumb, you should see the dignity with which she reflects back on her stunt. Or lack of it. What struck me is the editorial commentary on the Jezebel page around the video. Key phrase for me:
"Why it might be fake: Some of her mannerisms/utterances seem so over the top that they feel actress-y."
My thought to that is, well, at this point, somebody being 'actress-y' on video might just be what we should expect from people under 25, even when expressing genuine emotions.

The current tweens, teens, and twenty-somethings have basically grown up being recorded on digital media. Not just digital snapshots anymore, all results instantly uploaded to Facebook, Bebo, and other social sites, but moving pictures, yourself on TV. Being able to pose and look interesting on command when you see a lens for a static shot, no matter what the occasion, is only a basic skill nowadays. Whereas most people born before 1980 have grave problems just getting a passable, yet still stiff, grin onto an image, most everyone born after that knows how to generate a pose, a grin, a group or gang sign, a physical contortion to show the best curves or shape or social identifier, a group hug and pile-up to look like a happening team, the moment a picture might be taken. It's the result of digital cameras and their instant results becoming so ubiquitous, as well as a now relentless and ubiquitous celebrity culture of professional posers for these boys and girls and young men and women to model themselves on when it comes to how to pose when you see a lens.

And now webcams are everywhere, built into almost every notebook worth the name, used to chat and show off and record, constantly, including, and heavily, by the current teens, people at a stage of life where you are consumed with exploring yourself, and that has its effects. It's not just the stories about kids getting into trouble for snapping naughty shots of themselves; the phenomenon is broader than that. I was recently told an anecdote by a father, currently on an extended gig away from his children: when he tries to have a video conversation with them on Skype, he can see his teen daughter looking more at the window showing the feed from her own camera than at the window showing him, turning her face and shoulders as she, almost compulsively and without consciously noticing, tries out poses, explores her face, explores how she comes across, instead of paying full or even half attention to her conversation partner.

No outward expression of emotion is 'pure'; very early on children begin to model how to laugh and cry and be angry on their environment, which they are trying to communicate with after all. We change how we laugh and cry and show anger depending on what surrounds us, what the norms and manners are, what we get exposed to and told is normal. I could see it in my nephews 5 years ago when, at age 9, they suddenly started being exposed to anime: how they stomped their feet when they wanted to express frustration, jerked their shoulders or grimaced anger in ways that seemed really exaggerated and cartoony but fit perfectly with the aesthetic of exaggerated cartoony emotional reaction shots in what they were seeking out to watch on TV. The modeling isn't even conscious, but we are social animals: we do what we see.

So what would the model be for how to show your emotions on video? What will the YouTube Skype generation model itself on, without even noticing, to come across effectively during all their cam sessions? Acting. The results of what they see on TV. The shorthand for communicating internal emotional states effectively and directly that they see every day. Most of it what we would call Bad Acting, no less. When you are emoting in front of a lens, that is how you project yourself: the way we have been taught crying and laughing and angry people look on a screen. Broad, and dramatic, meant to jump across technology. And the fact that they model their expressions on having seen themselves on camera for years, and on how other people 'act' on reality TV shows, in a further round of this dialog between people and media, doesn't mean they don't have genuine internal feelings. It means they have just been influenced heavily by the technology around them, and internalized its messages.

If sorority girl is 'actress-y' in how she cries, it may indeed signify that she is a complete fake, a real actor paid by an ad agency to contort her face to relate this anguish for some commercial reason. (But no actor would cry that 'fake looking'.) It may also signify that this girl has already spent so much time in front of a camera she is unable to not adapt how she carries herself, unable to not be 'actress-y', when one is pointed at her.

Sunday, December 14, 2008

Job Flash

I just saw on some digest the following job title go by:

Business Development Manager (Missile Defense + Secret)Omaha

and I briefly had images of trying to sell ad space on the side of missiles. Meetings of men in pressed jeans and khaki pants talking earnestly about how to monetize explosions. The word 'impact' took on a whole new meaning.

Gawd, has the .com revolution and its culture ever done a number on those of us in it.

Wednesday, December 10, 2008

Thoughts After A Monday

Mobile Monday London event this Monday. There are a number of Mobile Monday events held in large cities for mobile-software practitioners, and this one in London was two nights ago. We met for two panels in a lecture hall (they call them Lecture Theatres here) at Imperial College in South Kensington. As I told the gentleman in the seat on my right to break the ice: "I haven't been in one of these in 14 years, and certainly not in this country, but they all look the same and I am having flashbacks. Most of them traumatic." It really does sometimes feel like all universities and colleges are the same building with some cosmetic differences. After the two panels, a mixer for drinks and networking.

The first panel was on Mobile Social Media; some panelists couldn't focus on anything but the 18-35 market segment, until I actually asked if there was any thought at all about whether the retired affluent segment might want something to do with social media. You know, the people constantly showing off their grandkids? Nothing, really. Simon Lawson did have the interesting comment that what was holding the older population back from participating in new media was: "It's taxonomy. As soon as you have to learn a bunch of new terms you put people off. When I explained Twitter as 'you can text me and my brother at the same time' she got it."

Which made me think of how technology always gets explained in analogies (to the point that no Slashdot post about the social impact of technology is complete without a car analogy), necessary until the technology actually gets used and internalized. Still, there's an analogy for every new technology explained in terms of old technology, which must have had an analogy to previous tech when it was new, and thus it's turtles all the way down until you can explain Facebook in very, very, very long terms of using a stick and lightning.

Still, we have to wonder if the older segment is underrepresented as users of mobile technology because they are not open to new things, or because they will not put up with crap. Considering my 76-year-old father sends me plenty of SMSes and can't wait until I hit the Netherlands to show him more about his MacBook, it's not exactly the former. New technology that works is nice when it helps you stay in touch. Of course, the phone he uses is a big-buttoned, indestructible Nokia from the late nineties with a 4-line, 20-character black-and-green LCD, and he is happy with it. He 'gets' the phone, and the phone works. My current phones often don't even work fully for me, yet I put up with it because of the promise that the parts I do get will bring me cool experiences. Sometimes I think we early adopters are just too patient, that if we were more brutal, manufacturers would try harder.

I am starting to think the older set needs to be the new focus group for evaluating whether a new social technology is a fad or will have a lasting impact. If it doesn't work for them, the whole idea and execution are just not robust enough.

Thursday, December 04, 2008

Dear Sun

I tried. I really did. Based on the latest cheerleading on C|Net, and since I am not averse to coding a UI as well as specifying it, I tried JavaFX. I almost didn't, since the first three of your 'Getting Started' videos I saw were of less-than-totally telegenic people cheerleading instead of showing me a process or how to actually do something, but I went ahead anyway.

Downloaded your plug-in into the most popular development environment (yeah, it's not your own NetBeans), set up the most standard project configuration, and bam, it doesn't work. Turns out that on Eclipse (3.3.2, Mac OS X) you have to select that the project should put its output in the same directory as the source when defining the project, or you will get a "[filename].fx not found" error.

But then comes the killer. You want your JavaFX to be the new Flash, the new quick and easy way to make nice multimedia apps for the web. Except all the demos just end up as Java programs. You know, applets. Pieces of the web that, when your browser encounters them, make everything grind to a halt, worse than hitting a PDF, as the system tries to start the Java sub-system. Look, I have made user interfaces in Java. Huge ones. I coded 70% of the lines and supervised the other 30%. Yes, JavaFX seems to make it easier to make pretty ones. But the end product is going to be just as bad to run inside a browser as anything else Java, and that is not what I want out of life. If I had a huge investment in Java code I might be more patient, but still... Oh ok, who am I kidding, if someone paid me I'd seriously take it up. What I want is to be able to make apps that run as fast and ubiquitously as Flash, so I think I will try Flex for my project, thanks.

Also, that per-unit licensing fee for the runtime on mobile phones? Mobile-phone makers don't like per-unit fees. Makes development costs unpredictable.

Hey 3rd-Party Android Coders!

Are y'all sure you didn't lock your applications to the screen size of the G1? You know, that big 480 x 320? Because if you did, they won't really work here, on the world's second Android phone.


The Agora phone by Kogan. 320x240 touch screen. No, I have never heard of them either


Oh, some of you didn't? Some of you made a sort of dashboard one-glance app that is now cut in half? Or a game where the baddies appear at the edge of the screen? And you are thinking now, oh who cares, nobody will ever buy that phone, I will just stay with my G1 platform specs?

Probably a good decision right now. But all of us WAP weenies from the WML / WMLScript days just want to say: "We feel your pain. Even as you are now trying to shrug it off."
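If you do want to cope with it, asking the platform for the real screen size is cheap. A minimal sketch, with a made-up class name and nothing taken from any real app:

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.DisplayMetrics;

/** Sketch: read the actual screen size at runtime instead of assuming the G1's 480 x 320. */
public class ResolutionAwareActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        DisplayMetrics dm = new DisplayMetrics();
        getWindowManager().getDefaultDisplay().getMetrics(dm);
        int width = dm.widthPixels;    // whatever this device really has
        int height = dm.heightPixels;
        float density = dm.density;    // scale factor for density-independent sizing
        // Lay the UI out relative to width, height, and density,
        // instead of hard-coding pixel positions for one phone.
    }
}
```

It won't magically save a dashboard designed around exactly 480 x 320, but at least the baddies can spawn at the real edge of the screen.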


Edit: Yup, I am not making this up. Android-app coders indeed weren't ready for multiple form factors. And this actually will make a difference when wondering whether to develop for iPhones or Android.

Wednesday, December 03, 2008

N97 Follow-Up

First of all, for my US readers whose iPhone was their first introduction to a smartphone, yes, that is a camera on the front of the N97. You're supposed to use it for 3G video calls. I don't know if anyone ever does in any kind of numbers, certainly not in the UK. It is awkwardly positioned: you either have to hold the phone right in front of your face making your arm tired, or you are sending a lovely picture of your chin.

The real camera is at the back, a 5 megapixel sensor behind Carl Zeiss optics. 5 is considered a letdown now in a flagship phone, especially since Samsung is shipping 8 and 10 megapixel sensors, but in reality all those pixels just let you print bigger pictures, which nobody really does, and for the web all those extra pixels just mean you have to shrink the image even more to make it fit Hyves / MySpace / Facebook, etc. Plus a denser sensor just introduces more noise into the picture. Seriously, if device manufacturers want to offer better pictures, camphones are going to have to be designed around bigger sensors and bigger lenses, not continue to cram more electronics onto the same size of sensor chip. The shutter button for the back camera is on the side, so the whole phone works like a point-and-shoot camera.

As for an application download store like the iPhone and Android devices have, well, Nokia already has one. Has had one for years for the N Series, and it barely got any press. It's called 'Download!', and lives on the top-level menu screen. It's totally annoying to use because every step drilling down into the hierarchy requires an explicit and long round-trip to the server, the round-trips often fail and suddenly erase half the store, items are organized by publisher, which means you can't get a list of all games or all books, it all looks like somebody threw up tiny cartoony icons and cutesy names and expects that to be enough information to justify spending £5, there are no working previews, and in the US it actually asked me to enter my credit card data with the keyboard while in the UK it makes the premium SMS billing really confusing (I get two SMSes per purchase). Right now, on my N73 on T-Mobile, every single attempt to browse the catalog is failing. It's shameful.

If Nokia wants an app store to compete, they will have to

  • demand the N97 only gets sold with data plans where the user won't have to care about extra charges when browsing the catalog and downloading applications

  • have a back-end that allows the store to work reliably all the time

  • streamline the process so 3rd-party developers can get on the store

  • make freeware or try-ware possible

  • streamline the process to make the store global

  • make payments easy everywhere on the planet

Yeah, it will take some work.

Tuesday, December 02, 2008

Nokia Has Me Again

Yeah, the N97, announced today.


No wait, that rumor got it wrong. It's this, now formally announced:


Nokia N97


The memory size of 32GB is indeed astounding, until you realize that is pretty much what Apple is rumored to do for iPhone 3 in 2009. Which is bound to come out at the same time, because Nokia has only committed to H1 09, and that ain't Q1 09. That means Q2. And from the arm's length they kept the press crew at during the unveiling today, it's obvious to Nokia watchers that this thing needs plenty of work before release. On the other hand, do realize that phone software gets worked on to the very last moment, when it congeals from pieces into something and in the last months suddenly becomes stable and great, where 6 weeks earlier you may have wondered HTF they were going to release this on time. It's just how that works.

Having a keyboard is great; I think it actually makes heavy text users likelier to buy, because the skepticism towards on-screen keyboards always remains, and with a slider like this there's no compromise on screen real estate. The screen itself, by my calculations, comes out around 190 to 200 ppi: pretty sharp, pretty high. It will make that dashboard of your online social life Nokia is pushing in these screenshots actually pretty usable, while in the press shots on your standard monitor it looks like a blurry mess.

But is it any good? Will people flock to it? The only thing, IMHO, that decides that in Europe is cost. Seriously. I have seen so many people complain about their slow, under-performing phones, only to then hear that they just went for the free model because they didn't want to pay. If Nokia wants this to outsell the iPhone, it needs carriers to offer it for the same price, and on the same plans, including Don't-Have-To-Think-About-It Data. And only then will software quality matter. If the quality and integration and snappiness and consistency are abysmal, people will bring the phone back and buy something else. If it is acceptable, people will keep them.

Just make sure they demo well in the shop.

Sunday, November 30, 2008

My Browser Is A Mess

I am a little late updating as I was trying to get a website revision out the door. Turns out it didn't happen like I wanted it to, because JavaScript is an abomination of a language that has plunged us back into the software crisis of the 70s. I will explain that when I am able to write better, but it will be a problem if we keep insisting on using web browsers as the delivery platform for all new programming, including on mobile devices.

One problem I am running into is window management. I will often have two web chat systems, webmail, 8 tabs, and assorted sub-windows open and running in my web browser. Mentally these are separate applications that I need to keep track of and manage on my screen, but the operating system shells of the world all treat these windows as belonging to the same program, and show me just one button on the taskbar (XP) or one entry in the Dock (OS X) or when Alt-Tabbing (both).

Something needs to be done here, and I am not sure if the browser developers can do it.

Sunday, November 23, 2008

People Like To Talk. Especially To Each Other.

Again: Twitter is a microblogging service in which a user writes short entries, 140 characters at most. You can send your updates, called 'Tweets', over mobile text message, over IM (when it works), and over their web interface. Twitter users subscribe to each other's streams. To direct a Tweet at a specific person, you put '@person' in front of it; think of the @ as a symbol for the word 'at' here. So to write a Tweet you mean to direct at me, you would send an update like '@fj My comment is directed at you'. It would still show up publicly in your Twitter stream in between your general Tweets, but it would be sent to me too, even if I do not follow you.

I have been noticing something interesting in the tweets of my blogging friends as they create Twitter accounts: first they start off with a Tweet or so a day. Then five a day, maybe one '@someone' Tweet per day in between the rest. Then, suddenly, their daily output is a lot of '@someone' Tweets and a few general ones. You can just see their progression of starting out slowly and then suddenly getting caught up in the society of Twitter. More interesting to me is that the @someone format exists at all. Twitter was created for people who felt stifled by the conventions and format of standard blogging, where you had to think about entries and have a narrative and worked-out thoughts. Twitter was just supposed to be random quick thoughtbytes. No comments, no follow-ups, no threading, none of the standard blogging stuff.

Except that people are social. They see stuff, they want to react to it. They want to answer, they want to question. We have these massively complicated brains to manage being social animals who think in symbols and communicate through and about them. People whose brains don't do that the way 90% of the rest of the pack does are considered defective and get labels like Asperger's and autistic, because indeed they will have a hard time being part of the world. We want to talk, and we will work it into every system we can, even if the system does not explicitly allow it. Graffiti gets made as an answer to being in the urban landscape, but then gets re-tagged and over-written. Books get their margins written in and put back in the library. People yell at TVs, often in utter frustration because we know the TV can't hear us. People talk back to movies, less frustrated this time, but they are actually talking to the other movie-goers. Billboards get 'defaced'. Wherever we can, we go into dialog. (Incidentally, I do not condone most of the forms of dialog I listed here, I am just stating them for illustration.)

Computers make this two-way exchange very easy, and the web sites that are built around it are the most successful: the blogging revolution, forums, social networks. Facebook now allows commenting on status updates, which are short statements of the user's state of mind, much like Tweets. But by making explicit commentary fields for status updates, the dialog there is controlled, directed, and far less disjointed than '@someone' Tweets, where you miss most of the conversation unless you follow everyone involved. There's no threading, but because of the ephemeral nature of the Facebook news stream, so far the number of comments on a status seems to stay in the single digits, so they remain manageable.

Making a web system that is about expression but does not include a way to react is not just a lost opportunity, it is inviting frustration (we want to talk), and then reaction (because we will talk): competing websites, defamatory websites pointing at your service, websites discussing your service. And that means a loss of eyeballs, and also of control. Right now, adding a comment system to any webpage has become even easier: in response to their comment system looking outdated compared to Disqus, LiveJournal, and the other commenting systems that live in blogs or can be added to blogs, TypePad announced the beta of its own new commenting system with avatars, profiles, threading, and connections, called TypePad Connect. With just a click any TypePad site can migrate to this new system, but, more interestingly, Six Apart claims that you can add this system to any web page with a little JavaScript, not just to TypePad blogs.

This is really quite useful. I am right now working, as a favor, on a little website for the book of a friend, and an important part is the Errata page. There are many errata (it has cost the book some stars on Amazon) but we know we haven't found them all, and other people will. Do we need to add a form to the page to submit more? We can't add an email address; it would be spammed within seconds of going live. What kind of form? What kind of scripting? Hey, the writer already has a TypePad-based blog. And the migration to TypePad Connect went well. So if the writer just makes a TypePad Connect forum for the Errata page, he will get emails when people leave errors, and we can incorporate them easily. No extra server-side systems for feedback need to be installed, no new passwords, avatars, nothing heavy, just something light that extends previous systems. Will it work? We do not know yet, we haven't tried. And comments are a double-edged sword: they have to be maintained and come with their own spam -- just check the comments on the web page where I first saw TypePad Connect announced.

People like to talk. They will do it where they can. The talking, when managed, is incredibly useful. Harnessing it makes wonderful things happen.

Tuesday, November 18, 2008

Stealing JavaScript

I was talking to a new friend who is starting his own small web agency, recruiting small businesses. It is, at this point, a two-person operation: him for sales and another person for the actual creation. "He's all into XHTML, CSS, validation, talks in pixels."
-- "Who does your JavaScript?"
"We just steal that."

It's true. And very much done, and not just for JavaScript. I wasn't the only programmer out there who would rather start with a half-baked program or skeleton or template than fire up the software development environment with a blank page, in any language, not just JavaScript, scouring the web or FTP sites or textbooks for some piece of source vaguely related to what I wanted my program to do. Good environments like Eclipse will even try to fill in as much as they can for any project for you.

There is some question of ethics there, but in the end we do what we do to get there fastest without committing too egregious an infringement. Of course, the Open Source movement is all about having this not be a problem: Open Source projects are very explicit that you can re-use their computer code, as long as you release your modifications and re-uses and adaptations to the larger community as well. Trying to operationalize this involves much hair-splitting over copyright notices, over what constitutes re-use and adaptation, and over when source code for a computer system should or should not be released back to the community.

JavaScript sidesteps all this. JavaScript source code is compiled inside the browser, so it is the plain-text source code that gets shipped around when pages are loaded. Sure, it can be obfuscated and re-arranged to be unreadable, but a good formatter can handle most of that. So while Open Source developers worry about infringement, about what license and notice to use, and about when they should make source available with their compiled binaries, JavaScript just gets shared automatically when it is used. There is no binary compiled JavaScript to ship.

The result is that the ethic of 'stealing' JavaScript is widespread, but the fruits can't be hoarded anyway: they get shared immediately. In that way it is enforced Open Source, and possibly a very good model for future systems and languages: don't allow binaries to be shipped, just human-readable programs. Java would almost be there if it didn't insist on its intermediate bytecode form. All that's missing is the 'currency' Open Source proponents say is the one important to them: credit. When the copyright notice gets taken out, so does the credit and kudos for who made this brilliant script, and with that future employability. Another system is necessary to keep it.

Saturday, November 15, 2008

A Quick Route To Making E-Ink Readers Useful In The Workplace


Plastic Logic's E-Newspaper


You know, I was just reading the press release announcement of the only E-Paper reader I so far actually like the looks and size of, and I suddenly had this thought after watching the video:


If we really want this reader to quickly replace printed paper, it should come with a paired wireless dongle that you plug into any USB port and that identifies the reader as a printer. Not a hard disk. Not some new viewer. A PostScript or PDF printer. Easiest way to get your proofs, your in-between versions, your test lay-outs, your read-on-the-plane documents onto it. Our whole infrastructure is built for printing documents, every program and operating system can do it, and it needs to be done so often in actual business environments that it has been engineered (when it works) to be almost foolproof and mindless. Everyone has been trained in printing, and understands it.

Sure, the reader should be able to get data like books and newspapers or other documents some other way, preferably as easily and securely and reliably as the Kindle does now. But to really fulfill the promise of cutting down on paper and paper costs, the quick route is to embed this thing in the printing chain, and easily so, on every computer. When you walk into someone's cube and see something, you just plug in the dongle, your personal reader gets added to the OS as a printer, you print, unload and unplug, done.

Wednesday, November 12, 2008

Yeah, Neeext!

Dear Sony-Ericsson

800 bucks unlocked for a laggy phone with a finicky keyboard, a backwards screen, and no 3G for one of your target networks, just because you innovated on the Home Screen of the unstable OS?

God. Your company deserves to be in trouble.

Tuesday, November 11, 2008

300 DPI

Detail of 300 DPI test pasted into a Moleskine notebook


Wondering what the real differences would be for mobile UI design when handheld screens end up having a resolution of 300 dpi, I did what every UI designer does: I made a prototype to experiment with. Now, I don't actually have an HTC Touch HD or that new Sharp phone to play with -- and they haven't quite reached the magic resolutions just yet -- but I do have a cheap inkjet printer. Since I expect the form factor of those screens to stay in the small handheld realm, I decided to mock up some User Interfaces on paper for a device the size of my Pocket-size hard-cover Moleskine notebook, which is 9 cm by 14 cm.

So, I fired up Word and composed four example screens of what a portable device with a very high resolution could be used for. My first creation was a fake medical record for a diabetic teen, as that is the kind of high-volume mixed-data file I have experience with. Typeset at a size reasonable for a standard web page -- an 11 point font -- it looks just clunky and wastefully big at 300 dpi, and the two data visualizations I added showed me that far higher data densities should be just fine. So the second mock-up is a data sketch for a ward chart, showing the summaries for 3 beds. (All names fictional, all data fictional if not nonsensical.) The format for the graphs is from Edward Tufte's project for a one-sheet medical record; it normalizes laboratory values to all have the same visual characteristics so that they become easy to glance-read. Turns out that one can make a useful small display like this because of the high data density. It isn't significantly more text than would fit on the same-size 72 dpi screen, but it is some, and it is easier to read. And of course, with the ability to use animation and to expand and collapse charts dynamically, abilities that come from using a handheld instead of a static sheet of paper, I think a pretty credible medical record viewer could be made to fit such a small form factor. It would be far more comfortable than working with the currently available medical tablets and laptops.
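For anyone who wants to build the same kind of ward chart: the simplest version of that normalization trick (my own shorthand, not Tufte's exact specification) is to rescale every value against its own reference range,

\[
v_{\textrm{scaled}} = \frac{v - \textrm{low}}{\textrm{high} - \textrm{low}}
\]

so anything between 0 and 1 is inside the normal band and every lab test can share the same tiny axis; a made-up glucose of 9 with a reference range of 4 to 8 plots at 1.25, visibly poking above the band.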

Miniaturized version of UI mock up for a 300 DPI handheld. Click to download the A4-sized PDF.


Two more panels added: one to test using such a device as a magazine -- a fashion glossy in this case -- and one to view snapshots with. The fashion shots are the large photographs from Style.com, and once they are imported into an image program and the resolution is set from 72 to 300 dpi, they of course become much smaller while keeping their detail. On paper at least, uncomfortably so, and even enlarged the fashion details are not very visible. I have to wonder if that isn't a problem with using photographs originally prepared for the web, though: when using my snap of the yellow tree at St Paul's Cathedral, which was stored at 300 dpi, screen versus 300 dpi print made quite a difference. At identical sizes, when viewing the PDF on screen, the blooms turn into yellow blobs, while printed out, even on my cheap HP that uses many tricks to interpolate a higher resolution than 300 dpi, every flower is separated and sharp. What takes up a whole screen on a computer becomes easily viewable with the same detail in a smaller frame.

Still, nice for pictures, but does it really make a difference for information visualizations? These graphs and text are just as readable on a standard computer screen at the same size, and a 300 DPI screen will not be a full-color reflective experience like paper. And then I noticed a difference: on a standard computer screen, if I wanted to see the detail in a dense graph better, I had to crane my neck to get closer to the screen, or make gestures on the keyboard to zoom in. With the Moleskine, I just held it closer and farther as was comfortable for detail or overviews, while I was standing up or walking around. It doesn't seem like a big thing, but if I was dealing with these kinds of detailed information visualizations all day while going from location to location, I would seriously prefer that.

So, what are the quick lessons for getting ready for when handhelds start having very high resolutions?
  • The photography workflow will need to be changed. Pictures that are downsampled to be 72 or 96 dpi for websites will not use a 300 dpi screen to its full potential to show detail. The workflow for print may be usable here, but the end product or website will need to come in two formats: one for high-resolution, and one for low-resolution screens.

  • A high-res screen doesn't mean you can make the font smaller and add a lot of text this way. Text does become slightly easier to read, but not dramatically so.

  • However, more information can be communicated on a 300 dpi screen than on a 96 dpi one, just not in the text: the charts and other visualizations can show far more detail, and those will be easier to read. They will require more careful design than what is currently the standard in pie and bar and block charts as supplied by Excel et al. So that's where a lot of work needs to go.

Friday, November 07, 2008

Business Net Filters May Now Be Useless

Everyone knows the Web is an awesome time waster, with many addictions waiting to be had, whether it is eBay or fantasy sports teams or gambling or obsessively refreshing political blogs. And what time is better to play with the web than when you are sitting in a chair at a desk, both made to enable using a computer efficiently and effectively, at the time of day you are most alert? So when Web access became a mandatory business tool for everyone in the mid-nineties, businesses were confronted with the fact that they were putting a direct line to brain-crack in front of everyone. Because let's face it, work is boring, and the web is made to not bore.

Enter the business network proxy: a single place all connections going outside the Intranet have to go through, where they can be monitored and filtered. While many computing policies at large companies try to make soothing noises that you do have a life and they understand you sometimes have to check up on it while at work, people have been fired for being on eBay too much. And the way sexual harassment laws work in the USA, an employer can get in trouble for 'creating or allowing a sexually hostile workplace' if there is a lot of sex being browsed on screens or passed around in email, so that has to be filtered too.

So much can be written about filtering policies and systems, because they always filter out too much and not enough at the same time, and they can never keep up with how the web grows. Still, the best policy I have ever heard of for managing people's web use was to not filter anything, but just send new workers an email at the end of their first week listing all the sites they browsed and how much time they spent, with a warning that next week's list would be posted, with names, in the department, along with everyone else's weekly list. Sitting behind our desks we forget that the network people can and do check how we use that business resource, and just that reminder gets most people in line. I used to bring my own sub-notebook with a wireless CDPD card (an early wireless data standard for mobile phones) because I refused to check my email over the corporate network; nobody but me needed to read that. Yes, it was conspicuous that I wanted outside access, but I also got my work and more done, and people understood I had a private email life.

Now most workplaces think they have some form of control, and that sex and mayhem are being safely kept out the door, if not all of it then enough. Except that yesterday I was sending a friend a URL of something remarkable I saw on a site, and he complained that, ah damn, his company was filtering it -- he was at work while I was not, a time-difference issue. I told him it could wait, and a minute later he responded with "Nope, iPhone 3G to the rescue. Oh wow look at that" etc. Yes, many of us geeks -- sorry, knowledge workers -- and an astounding number of all kinds of other demographics, now carry the full Internet with an ok screen in our pockets and purses. Corporate filters to keep us out of trouble are useless: we all have our own unlimited data plans and screens to get to it now. I don't think it will be the end of the corporate filtering industry, but boy, HR departments had better prepare new presentations and guidelines about how to use your own data terminals, terminals HR cannot check up on, in the workplace. Mark my words, someone will get a company to seriously pay up because a colleague showed them sexually explicit videos on that colleague's iPhone or G1 or Touch HD, not using the corporate network or equipment, for a second time after having been warned not to do that.

Tuesday, November 04, 2008

Decade Of Finding Context

Um, did nobody notice that the mobile Internet turned 10 years old this summer? I was going over some old documents and realized that I started working on Nokia's WAP Toolkit (later Nokia Mobile Internet Toolkit) in 1999, and at that time Nokia hadn't even released its first WAP phone yet, so WAP couldn't have been very old. And indeedy, I just found a reference to the WAP Forum, the organization of telephone companies that came together to formalize the WAP standard for the mobile Internet, constituting itself somewhere around the summer of 1998.

10 years of trying, and what we are finally getting is the real Internet, just slowly reshaping itself for mobile. I can't fault WAP for what it was trying to do; within the context of the time, a whole new stack of technologies made total sense for delivering information to mobile phones. The problem is that they used words like 'Internet' and 'browsing', creating expectations no 4x25 character LCD phone could fulfill.

Since the first days I always heard stories of how WAP would be great not for playing Bejeweled or downloading and reviewing a spreadsheet, but for getting coupons. "You'd be close to a pizza place or a Starbucks and then you'd automatically get a coupon and that would drive traffic into the stores and the stores pay for that, man" was always the use case bandied about, which in my cynical days made me describe myself as someone who worked in "the Mobile Pizza-Coupon Delivery Industry". The scenario never worked out, because you always had to find out where the phone was before you could send the right coupon, and nobody could make up their minds whether the consumer should pull coupons (i.e. have to try to fetch one every time they were in a new shop) or get them pushed (i.e. be bombarded). Both options suck. Neither of them is going to 'surprise and delight' anyone. To put it in terms I'd hear while working at Disney, there was never any magic in delivering coupons. Especially since the scenarios were always about making hapless consumers buy and getting money from shopkeepers for delivering this crap, never fundamentally about making people happy.

What you need to deliver magic is to know the user's context. Fulfill the anticipation. Either by creating the context and the anticipation yourself, by making the park and the rides, or by knowing where the user is and what they want. Location-based services are all about knowing the user's context: where are you, where do you want to go, how can I help you with that, what can I give you at those places? It's not the only piece of context you need to deliver magic, but it is a very important one (which is why Disney controls its contexts in the parks so tightly), one that manufacturers have been trying to put into devices as fast as financially possible, and that application developers have pounced on to unlock with applications. A whole industry, GPS devices / sat-navs, was created in the 90s around this piece of context, a viable and very big industry, because there was such demand for help when, well, navigating this context. Now in this millennium this industry is being networked so that the devices can update the context live. And thus it is merging with phones, because they do networking and communication.

We want our mobile devices to help us. To really help us, they need to know our context. Location is one piece, and it is making life so much easier. I have moved to enormous cities twice in the last 2 years, and I am navigating them like I have always lived in them, saving so much time. But context is more than Location. The next one to lick is Intent: what is the user doing? Can I, as a mobile device, disturb? How can I help? This one will be a bit trickier, and it can wait until Location has been fully exploited and played out. But start thinking about it.

Friday, October 31, 2008

Density Changes The Visuals

In 1996, while I was knee-deep in medical computing, one of the press releases that caught my eye was about Xerox spinning off a display company for displays with a very high pixel density, around 300 pixels per inch. Your average computer display has 72 pixels per inch; newer ones do 96. 300 pixels per inch is in the league of newspapers and the first home laser printers to be deemed 'good enough for professional use'. This means the dots used on a computer screen are very big compared to paper, which is why a straight diagonal line on a screen looks jagged unless smoothing tricks like 'anti-aliasing' are used. You can make very detailed graphs and charts at 300 dpi that you cannot at 72 or 96 dpi, which is why you can still cram more static information onto a printed sheet than onto a computer screen. So this new company was making displays comparable to paper, which they were positioning as display devices for medical use, like reading electronic X-rays, or for the defense industry. Meanwhile I was facing the problem that there was no way I could put all the information contained in a single front sheet of a medical record, with its graphs and annotations, on a computer screen. Especially since 640x480 was a good screen then.


Sharp's 'FULLTOUCH 931SH' mobile phone for Japan


Well, it is twelve years later, and Sharp has just announced this phone which, at 1024 x 480 on a 3.8" diagonal screen, is getting very close to that magic 300 dpi number. Ok, first of all, that means the graphics chip inside that box could probably get away with less aggressive anti-aliasing routines, as jagged lines look far less jagged when the dots making up the line are so small. Also, you could probably display 6 point lettering and have it still be readable.
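A quick back-of-the-envelope check of that, using the numbers above:

\[
\textrm{ppi} = \frac{\sqrt{1024^2 + 480^2}}{3.8} \approx \frac{1131}{3.8} \approx 298
\]

so the panel really does land just under the 300 pixels per inch of those mid-nineties Xerox displays.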

This display is still not a reflective display like a piece of paper; a display like this emits light from behind the lettering, which makes it less easy to read than a well-lit piece of paper. And making displays of this density is really difficult -- with every extra pixel added, the chance that a pixel on the screen won't work goes up -- so yields are probably very low unless you keep the displays small, so we won't have our A4 / Legal-sized screens of newspaper density any time soon. But still, this is getting really close to a properly useful handheld medical record, or handheld inventory list, or handheld visualizer of complex financial data, or just a darn good comics reader that does the artwork justice. Charts and graphs and lay-outs currently used on computer screens are just not up to using the visualization potential of these densities. It's time to look at the most intense information visualization techniques used on paper, the ones from before desktop computing took off, and evolve from there, like Edward Tufte has done all through these display-oriented decades. Densities like this will also accelerate the move away from cartoonesque user interfaces and towards photographic realism, where objects look like their real-world counterparts, black outlines around elements are no longer necessary, and textures like water, leather, fur, and metal will be rendered so close to realistic they might actually look good as backgrounds.

Friday, October 24, 2008

Auction, Lottery, Auction, Lottery?

We all know how eBay and its auction system took the internet world by storm. A seller puts something up for sale, sets an (undisclosed) minimum price, and bidders enter the maximum price they are willing to pay. The computer checks all bids against the outstanding price, and if there is more than one bid it automatically increases each bid on the bidder's behalf by the minimum increment, over and over, until the bids are above all maximum prices but one, and thus only one bidder remains. Bidders who lost out are encouraged to increase their maximum price, and there's a time limit on how long the auction runs.
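As a toy sketch of that mechanism (my own simplification, not eBay's actual code or exact increment rules):

```java
/** Toy proxy-bidding sketch: each bidder states a private maximum, the system
 *  bids for them, and the winner ends up paying one increment over the
 *  runner-up's maximum (never below the reserve, never above their own max). */
public class ProxyBidSketch {
    public static void main(String[] args) {
        String[] bidders = {"alice", "bob", "carol"};   // made-up bidders
        double[] maxBids = {42.00, 55.50, 50.00};       // their private maximums
        double increment = 0.50;                        // minimum bid increment
        double reserve = 20.00;                         // seller's undisclosed minimum

        int best = 0, second = -1;
        for (int i = 1; i < maxBids.length; i++) {
            if (maxBids[i] > maxBids[best]) { second = best; best = i; }
            else if (second < 0 || maxBids[i] > maxBids[second]) { second = i; }
        }
        double runnerUp = (second >= 0) ? maxBids[second] : reserve;
        double price = Math.max(reserve, Math.min(maxBids[best], runnerUp + increment));
        System.out.println(bidders[best] + " wins and pays " + price);
    }
}
```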

Since I have moved to the UK and am, as always, looking for cheap stuff here, I have found many other online auction models being used. Like:
  • No time limits on auctions: every bid makes the auction price go up by a penny and the auction extend just a little longer. This means everyone gets to bid over and over and over and over. Oh, by the way, you have to pay £1.50 or so per bid. Basically you win when everyone else involved thinks they have spent enough on buying bids, including people who just logged on and are now bidding fresh this minute on something you have been spending pounds and pounds to bid on for the last hour. On eBay you are only out of money when you win the auction; here you could make 100 one-penny-increment bids, thus be out £150, and still be sniped by the person who just logged in today. Are you gonna make another bid and be out £151.50?

  • Lowest unique bid wins. There is a time limit on how long the auction runs. Place as many bids as you like at various price points, but it costs to place a bid. The cost to place a bid differs per item; placing a bid on a high-ticket item costs more money. The winning bid is the lowest bid at a price point that nobody else has bid on, a.k.a. the lowest unique bid. Right now one of the items being 'auctioned' is worth £100, and placing a bid costs £0.10.

  • Buy At A Price You Like. Not strictly an auction format. An item is offered initially at an undisclosed price, starting at the list price. It costs £1 to click on the hidden price and find out what it is. Every click that someone pays £1 for lowers the price by £0.30. If you like the price disclosed to you for your £1 fee, you can immediately buy the item. Else you can wait some amount of time and pay another £1 to see what the current price is; perhaps other people have lowered it by multiples of £0.30 by paying to click, and it may be at a price you like now.

There are more, but these give you the idea.
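For concreteness, here is a toy sketch of the 'lowest unique bid' selection from the list above (bid amounts in pence, all made up):

```java
import java.util.Map;
import java.util.TreeMap;

/** Toy sketch of "lowest unique bid wins": count how many times each price
 *  point was bid, then walk the prices from low to high and pick the first
 *  one that was bid exactly once. */
public class LowestUniqueBid {
    public static void main(String[] args) {
        int[] bids = {99, 150, 99, 101, 150, 120, 101, 130};  // pence, made up

        Map<Integer, Integer> counts = new TreeMap<Integer, Integer>(); // sorted by price
        for (int b : bids) {
            Integer c = counts.get(b);
            counts.put(b, c == null ? 1 : c + 1);
        }
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            if (e.getValue() == 1) {
                System.out.println("Winning bid: " + e.getKey() + "p");
                return;
            }
        }
        System.out.println("No unique bid; the operator keeps all the fees anyway.");
    }
}
```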

These 'auction' sites aren't for getting rid of crap in the attic. These sites only have new merchandise on offer, mostly consumer goods like iPods and cars and holidays, the kind of stuff people are told to invoke The Secret for if they want it for nearly free. And of course these aren't auctions, these are more or less raffles. Every 'bid' is just a ticket, a chance to win, and they never stop you from buying as many tickets as you want, of course not. The twist to each site is just how they select the winning ticket. The site organizers of course make out like crazy from all these little 'rights to bid' they sell. If you can get a flat-screen TV wholesale for £400, then all you need is to sell 500 bids. Just make it as addictive as possible by, for example, having little stretches of time where you can feel like a winner until a new 'bid' makes you a loser, a feeling you can make go away by placing another bid.

Is this gambling? Is this illegal gambling? Depending on the laws in a country, the answer is yes or no. I think that is why I have only seen 'auction' sites like this in the UK; I bet in the US these sites would have to be clearly marked as lotteries, and even then there would be plenty of demand for them. But well, I guess they are ok for now in the UK, so here they are: new ways to abuse the word 'auction'.

Thursday, October 23, 2008

The Mobile Web May Be The Only Web Soon: Design For It

I was browsing Nokia's website yesterday to find out more about the upcoming 5800, which these days always involves watching some piece of Flash with whatever music some marketing exec thinks is a) hip right now and thus b) something I need to have blaring at me. Unfortunately I was doing this just as Nokia's web servers were having hiccups, so every time I clicked some button in the Flash app on the page, the next Flash segment could not be loaded, and I'd get a 404 File Not Found error page. And of course, I couldn't just try to reload that segment again, because the error page had wiped out the Flash application. Flash breaks the browsing model of the browser that way. I'd have to reload the page with the Flash app, and go through the intro and the music and the posing of the phone again, before I could resume where I was. I knew the server hiccups were temporary, and that normally the Flash segments would load one after the other inside the Flash player on the page, so what I was experiencing would not usually be an issue, but then I thought, man, if I was doing some mobile browsing here, this would suck.

Mobile browsing is just like dealing with a colicky web server: often everything goes fine, but if you are actually on the move with your mobile device, you walk into or out of WiFi range, or, on the mobile phone data networks, you sometimes don't get the connection, or only get half of the page. It just happens. The network cloud has gaps, dead spots, and fragile pipes, whichever wireless networking standard you use.

The whole set of web browsing technologies (the 'stack', as these things get called because diagrams of them always look like stacks of blocks with names of standards in them) is pretty resilient to that. TCP/IP certainly was specified to deal with faulty or bursty connections. HTTP, the standard for the web, was designed to make the web-browsing transaction really simple: ask for a page, get one, ask for the next page, get one, and if getting one fails, it is really simple to ask again. The browsers built around it handle this with simple reload buttons and error messages. Network drops? Things stop, and then you can pick up again. Except for Flash, or some AJAX apps. They rely too much on perfect connectivity.
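To show how cheap 'just ask again' is at the HTTP level, here is a toy sketch; the URL, timeouts, and retry count would be placeholders:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

/** Toy sketch: request a page, and if the connection drops, simply retry.
 *  Plain HTTP makes this trivial because every request stands on its own. */
public class RetryFetch {
    public static int fetchWithRetry(String address, int attempts) throws IOException {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(address).openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                int code = conn.getResponseCode();  // performs the GET
                conn.disconnect();
                return code;                        // we got an answer, even if it's a 404
            } catch (IOException e) {
                last = e;                           // dead spot? just ask again
            }
        }
        throw last != null ? last : new IOException("no attempts made");
    }
}
```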

Which was an ok assumption for a very long time; people get broadband, they are always on, when they browse the packets arrive at their homes, so hey, as long as you request a valid file, a web designer can assume it will arrive. Well, maybe if they are outside of the house things fail on their tiny-screened phones, but hey, make a special mobile website for that. But here's the thing: the standard World Wide Web and the mobile web are not diverging. They are not continuing along parallel lines. They are converging. They are becoming one.

All phones before the iPhone have demonstrated that the mobile web as conceived by WAP / OMA / i-mode is mostly a stopgap, something for emergencies, not something people look forward to using if they have an alternative. The iPhone is demonstrating that many people are just fine with carrying a huge -- well, compared to a Series 40 phone or so -- piece of glass as long as they feel the usefulness trumps being tiny. Especially if you normally already carry a handbag anyway. Or a messenger bag, or a backpack, or cargo pants. And that perceived utility? Well, before it became the portable game machine to beat (Hi Sony, hi Nintendo!) there was the music, but we had regular iPods for that. So what was it? The ease of use? The keyboard? It can't be the camera, gawd. Well, I am actually seeing an awful lot of browsing out and about. A lot. I think it is one of the things about this machine that really drives those 10 million sales.

Seems people like having the web around. Like to be connected. I am seeing the same with netbooks: people who wouldn't have dreamed of shelling out a few grand for a tiny subnotebook a few years ago, deriding them as having too-small screens and keyboards, are snapping up netbooks now that they have become cheap enough -- 400 dollars -- to run Firefox, and going "Wow, I can start up and browse in 4 seconds!" The problem wasn't the screens and the keyboards, it was the cost-to-benefit ratio. OLPCs. Big portable media player makers like Archos are putting browsers in their flagship models. We like to have the web around. The real one.

This class of Big-Glassed Mobile Devices is a new platform, and it is changing the web. I am already seeing pages being simplified to display well on this handheld platform. It's happening because the iPhones and Androids and netbooks are becoming ubiquitous and powerful in ways Cliés and Palms weren't in their heyday. Color, processing, built-in networking, speed. The web adapts to whatever significant minorities of users use to browse. Smartphones are going there as well; Nokia's use of WebKit has always been about getting the 'real' web onto the phone.

But it does mean web designers have to design for this new mobility, just like they had to learn to design using web standards and not just IE-optimized pages when hordes switched to Firefox and Opera. Design for mobility outside of their m.company.com or their company.mobi domains. And mobility here means not using WAP or XHTML Mobile Profile, but just sticking to simple pages, maybe sniffing the browser to send a CSS that hides or shows or re-arranges items on the page. And realizing that if your page is meant for 'snacking' -- quick on-and-off browsing, not an immersive environment people need to spend hours in, just another site to check for some specifications or news -- it should degrade gracefully if the user walks into a dead spot. You know, not have the AJAX form get into some locked, unrecoverable state because the right extra piece of form did not come in, so that you have to reload and lose all the data you entered. Same for multi-page forms, if the user can't get to the next page immediately but needs to wait until they are in another Starbucks. And certainly not have Flash conk out and require the user to go through your whole movie again just to get back to where they were when the bus they are on gets back into network range.
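For the sniffing part, a rough server-side sketch -- the detection is deliberately crude and the stylesheet paths and page name are made up -- just to show how little machinery it takes:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Sketch: serve the same markup but pick a different stylesheet for small screens. */
public class StylesheetSwitchServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String ua = req.getHeader("User-Agent");
        boolean mobile = ua != null &&
                (ua.contains("Mobile") || ua.contains("Symbian") || ua.contains("Android"));
        // The page template links whichever stylesheet was chosen here.
        req.setAttribute("stylesheet", mobile ? "/css/handheld.css" : "/css/desktop.css");
        req.getRequestDispatcher("/page.jsp").forward(req, resp);
    }
}
```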

Wednesday, October 22, 2008

Checking Out Android's API II

So, pretty much every 3rd-party application on a smartphone really needs to do two things these days to be useful: get information, and display it. In the previous post I mentioned that Google's Android has some very interesting facilities for programs to talk to the rest of the environment, getting data from the information repositories in the phone like the phone book or the GPS system or any other storage system that gets added and makes it known it has information to share. What about displaying, then? How much of a pain is it to create a visual that is up to modern standards of beautiful mobile applications?

Well, on first glance the drawing facilities look really familiar to any J2SE programmer, or anyone familiar with most other windowing toolkits: there's a Canvas object of some kind that the system gives your program to execute calls on, like 'draw a circle' or 'draw a line', and you can pass along an object, called a Paint object, to describe attributes like what color to draw in, how thick a line to draw with, and so on.
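
To make that concrete, here is a minimal sketch, assuming the Android 1.0-era graphics classes as I read them in the documentation (the class name and the colors are just made up for the example): a custom View gets handed a Canvas in onDraw() and executes 'draw a line' and 'draw a circle' calls on it, with one Paint object carrying the attributes.

// A minimal sketch, not production code: a custom View that draws a line
// and a circle with a single Paint object, as described above.
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class SimpleDrawingView extends View {
    private final Paint paint = new Paint();

    public SimpleDrawingView(Context context) {
        super(context);
        paint.setAntiAlias(true);        // smoother edges
        paint.setColor(Color.RED);       // what color to draw in
        paint.setStrokeWidth(4);         // how thick a line to draw with
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // The system hands the program a Canvas; the program executes calls on it.
        canvas.drawLine(10, 10, 200, 10, paint);   // 'draw a line'
        canvas.drawCircle(100, 100, 50, paint);    // 'draw a circle'
    }
}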

Now, as a programmer, the minimum you need to make any kind of display is to be able to draw a single pixel in a specific color. Once you have that, you can make as elaborate a screen as you want, as long as you have time to calculate which pixels should be what color for text, for shapes, for the image you want to display. Of course, the more of these common items the API has calls to display, the fewer calculations and libraries a programmer has to write, the quicker you can release, and the quicker your program will be, as these operating system calls will use the hardware much better than a 3rd-party program can.

There's of course a library for widgets like radio buttons and sliders so a UI with forms can look familiar, but Android looks to have quite the interesting set of calls for custom graphics: the Paint object understands transparency, the Canvas has full text-displaying facilities including shaping text to follow a path, it understands a good set of shapes and the painting of images loaded from files, and you can add a matrix as an argument to just about every Canvas call that displays an object. That last part is actually really useful because it means the system makes it easy to do things like zooming and rotating any object you want to paint, and programmers can be sure the available hardware will be used effectively for these calculations.
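
Again a minimal sketch, under the same assumption that I am reading the 1.0 documentation correctly, of those custom-graphics calls: transparency on the Paint, text shaped along a path, and a Matrix passed to a Canvas call so the system rotates and zooms a bitmap for you. The bitmap resource name is made up for the example.

// A sketch of transparency, text-on-a-path, and a Matrix-transformed bitmap.
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.RectF;
import android.view.View;

public class FancyView extends View {
    private final Paint paint = new Paint();
    private final Path arc = new Path();
    private final Matrix matrix = new Matrix();
    private final Bitmap photo;

    public FancyView(Context context) {
        super(context);
        paint.setAntiAlias(true);
        paint.setColor(Color.WHITE);
        paint.setAlpha(128);                        // 50% transparent
        arc.addArc(new RectF(20, 20, 300, 300), 180, 180);  // a path for the text to follow
        matrix.postRotate(15);                      // rotate the image...
        matrix.postScale(2.0f, 2.0f);               // ...and zoom it 2x
        photo = BitmapFactory.decodeResource(getResources(), R.drawable.photo); // made-up resource
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(photo, matrix, paint);                    // image plus matrix
        canvas.drawTextOnPath("Hello, Android", arc, 0, 0, paint);  // text shaped along a path
    }
}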

I was also interested to find a set of animation object libraries to quickly define how specific objects should move or change shape or become more or less transparent, with helper objects to define timings. Ok, this is not a built-in, ready-to-go replacement for Adobe's Flash and ActionScript, and the library looks a little sparse, but it is a good start.
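
For instance -- and this is just a sketch of what I understand the android.view.animation package to offer, not a recipe -- fading a view out while sliding it sideways, with an AnimationSet supplying the shared timing:

// A minimal sketch: slide a view 200 px to the right while fading it out.
import android.view.View;
import android.view.animation.AccelerateInterpolator;
import android.view.animation.AlphaAnimation;
import android.view.animation.AnimationSet;
import android.view.animation.TranslateAnimation;

public final class SlideAndFade {
    // Apply a 500 ms slide-and-fade to any view.
    public static void apply(View view) {
        AnimationSet set = new AnimationSet(true);              // children share one interpolator
        set.setInterpolator(new AccelerateInterpolator());
        set.addAnimation(new AlphaAnimation(1.0f, 0.0f));       // fully opaque to transparent
        set.addAnimation(new TranslateAnimation(0, 200, 0, 0)); // move 200 px to the right
        set.setDuration(500);                                   // the shared timing helper
        view.startAnimation(set);
    }
}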

All in all it is more than I had available when I was doing Swing or J2ME as little as a year ago, and it should make beautiful apps a lot less painful to build. If the system is also smart and cool enough to manage the transparency of the Canvas properly so the contents 'below' the current app can come through even if the current app doesn't know what they are, some really interesting UIs become possible.

I have to say, I am tempted to find my OS X CD and install XCode to compare this with the iPhone API. Maybe if anyone asks.

Sunday, October 19, 2008

Checking Out Android's API

So because I am still unemployed, er, between UX contracts, I am exploring programming for the Google Android series of smartphones. There have now been plenty of reviews of the first device to have this operating system, the G1 by T-Mobile, but I wondered what the view of this system was like from the inside, the place where application developers get to live. The interface that a programmer uses to create programs for an operating system (the API) tells you so much about what the designers of the operating system really wanted or expected.

(Incidentally, admitting I did this is not the brightest idea: I have found so far the User Experience business is really skeptical about consultants who can program. I have had to really convince some prospective employers that yes, I did major in software architecture and got plenty of work in it, but it was only to get to make user interfaces, and really, I am fine leaving it behind. No, no, I do not miss it.)

And what Google obviously wants and expects is that no application on Android, even the ones the phone maker provides, is beyond being improved and replaced by 3rd-party developers. Android was obviously designed to have every piece be ripped out and replaced yet still have all the other pieces work. It is one thing to say this, programmers hear this stuff all the time, but it is another to engineer a programming environment to mean it. It really looks like Google means it.

Everything is more abstract than in any other phone programming environment I have dealt with. Of course, by using Java, the memory system is already abstracted away, and not something any programmer needs to worry about, which was already the first huge step in phone programming that J2ME gave us. (Because, seriously, was I as a programmer supposed to be better at managing memory than the latest algorithms keeping track for me? The moment the computer had the resources to manage memory, it should; programmers are too expensive to be bothered with it if they needn't be.)

The J2ME paradigm is to give the programmer a core virtual environment, and then to unlock different parts of the phone, over the years, as phone makers agree with Sun on what each API for every sub-system like the phone book or the media player or the Bluetooth stack needs to look like. Which means every one of the optional APIs (they call them JSRs) has the lowest common functionality, is implemented for the phone by each phone maker, and thus of course has incompatibilities.

Android has no concrete subsystems to talk to. This makes it more flexible. There is no extra library, no extension to probe and program, for some new facility. Every program, every subsystem, announces to Android what it does by registering itself with the Android core as a data provider, or as willing to provide a user interface for a certain function to the user, or both. It registers such by using constants like 'pick a user out of the phone book' or 'know all the dates in the calendar', and programs that know what the constants are can ask Android: "I need what this constant promises; can you start it, or fetch it? Call me when it is finished." Android comes with many constants pre-defined already, but as a programmer you can register anything new you want. Data is exchanged all through the system in the form of URLs that express "the 3rd entry in the phonebook" or "the current location", and only at the last moment necessary should a program hand that URL to Android and ask "What is this actually?", and Android will look it up from the owner of the information, without the program having to know what sub-system that actually is.
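
A minimal sketch of what that looks like in code, assuming the 1.0-era contacts provider and its constants (the column and URI names here are the ones I believe ship with it, but treat them as illustrative): the program asks Android to 'pick a user out of the phone book' by constant, gets back a URL, and only then hands that URL to the ContentResolver to ask what it actually is. Which sub-system answers is never named in the program.

// A sketch of the constant-and-URL pattern described above.
import android.app.Activity;
import android.content.Intent;
import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.provider.Contacts.People;

public class PickContactActivity extends Activity {
    private static final int PICK_CONTACT = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // "I need what this constant promises; can you start it? Call me when it is finished."
        startActivityForResult(new Intent(Intent.ACTION_PICK, People.CONTENT_URI), PICK_CONTACT);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == PICK_CONTACT && resultCode == RESULT_OK) {
            Uri contact = data.getData();   // a URL meaning "this entry in the phonebook"
            // Only now do we hand the URL back and ask "what is this actually?"
            Cursor c = getContentResolver().query(contact, null, null, null, null);
            if (c != null && c.moveToFirst()) {
                String name = c.getString(c.getColumnIndex(People.NAME));
                setTitle("Picked: " + name);
            }
            if (c != null) c.close();
        }
    }
}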

This means that programs need know nothing about how other systems work -- or even which system gets called -- to get results. They just make an abstract request and it gets fulfilled, as long as they know the proper request strings. They can register themselves as handling other requests without needing to know how they will be called. This means that every sub-system can easily be replaced without anything breaking. The possible failure mode here is that, as a 3rd-party eco-system develops, new programs will not know the right constants other new programs advertise as new capabilities, but that is easily coordinated by Google with just a developer website listing constants.

Can the phone book actually be ripped out and replaced? Will an Android phone allow the built-in mail reader to be replaced by something easier or better made by a 3rd party? Certainly from the documentation there seems to be no impediment for a 3rd-party program to register itself as a data or UI provider for existing capabilities, but we will see what actual implementations allow. Still, this is the most flexible and future-proof environment I have touched.

I still need to explore the graphics to find out how difficult it is to make something beautiful and sophisticated appear on the screen, something the iPhone does really well by having ported Quartz, the all-blending, all-zooming graphics API, over from OS X. The graphics capability of the iPhone is now the bar that must be reached in this area for a new phone OS to seem credible. But I can already see that Android isn't just some new flavor of J2ME. It is quite different.

So it irks me that the job ads I see for it all specify you have to be a J2ME programmer to apply. It's lazy, and it shows the recruiters or hiring managers do not understand how different Android is. As I have shown, getting something done with other parts of the phone is really different. Android is not as obsessed as J2ME with looking constrained and small (probably because Android phones are expected to run on smartphone hardware). Good J2ME programmers must excel at programming around the bugs and contingencies of running on different J2ME implementations; Android programmers will not, as Google will supply the whole environment and runtime for every phone. These days somebody who specializes in Java for the desktop (J2SE) has a tough time telling employers they can do J2ME programming, and vice versa, because while desktop programs and phone programs may both use Java, the subsystems are so different. Well, I think Android programming really is a 3rd variant, and I do not see at all why J2ME programmers should be the preferred ones to look to for the switch. Just get smart people.

Monday, October 13, 2008

Twitter Admits Defeat, And I Kinda Harsh Them On It

Because my circle of friends and my need for self-expression do not fall neatly along the lines of brand silos like my blogging and Twitter, I created AutoPostBot. AutoPostBot is a program I have running on my print and storage server here (an old Fujitsu subnotebook running Win2K) that monitors my Twitter feed and reposts it, almost live, to my personal blog.

Why? Because I wanted to see what would happen. It has been instructive, but I can discuss that later. Engineering-wise, I had a choice when I made the bot: how does AutoPostBot get my Twitter entries ('tweets')? Should it check my Twitter account for new entries at some reasonable interval ('pull the tweets') or listen to some Twitter channel to be told when a new entry happens ('get the tweets pushed')?

If I want my tweets to appear on my blog almost as soon as they are published on Twitter, using pull would mean having to check the feed very often, and 99.9% of the checks would not show a new tweet, since I Twitter once a day or so at most. That is pretty wasteful of network resources. And for AutoPostBot to get them pushed means AutoPostBot needs to find some channel to listen to that sends updates. Well, Twitter supposedly allows you to subscribe through IM, so AutoPostBot could implement some IM chat client, sign up with Twitter to be told about updates of my Twitter feed, and repost them. Seems pretty standard, robust. Many people have implemented chat bots that open IM channels and listen. This shouldn't be too hard.

Except that Twitter has never been able to maintain a reliable IM channel, either to send your tweets through, or to receive updates from other Twitter users from. Never worked. Always had 'outages' or worse: when Twitter would send me an IM with the tweet of someone I was following while I was not at my desk, my chat client would respond with 'I am not at my desk' (very common for chat clients to do) and Twitter would think that was an update from me and post it as my tweet. Which was just dumb, I'd have all these tweets basically saying I was not at my desk. Still, IM as a communication channel was a major bullet point for Twitter, even though it never worked over AIM any of the times I tried for months and months, and was spotty on Jabber. I therefore had no choice but to code AutoPostBot to pull my Twitter feed every minute, and it is one of the reasons I didn't open AutoPostBot up for reposting tweets from other Twitter users to their own blogs: if it had become somewhat successful, AutoPostBot could no longer have been a hobby project on my 900MHz Win2K box because it would have had to pull too many things too often. I wanted IM, I wanted updates to be pushed to AutoPostBot, no more network traffic than necessary.
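
For the curious, the pull approach boils down to something like this minimal Java sketch (the feed URL and the repost() call are placeholders, not the actual AutoPostBot code): fetch the feed once a minute and only do work when something has changed since the last check.

// A sketch of a once-a-minute feed poller; most checks find nothing new.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FeedPoller {
    public static void main(String[] args) throws Exception {
        String lastSeen = "";
        while (true) {
            // Fetch the whole feed as text (placeholder URL).
            StringBuilder body = new StringBuilder();
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new URL("http://example.com/my-twitter-feed.rss").openStream()));
            String line;
            while ((line = in.readLine()) != null) body.append(line).append('\n');
            in.close();

            String feed = body.toString();
            if (!feed.equals(lastSeen)) {       // something new: repost it
                lastSeen = feed;
                System.out.println("New tweet detected, reposting...");
                // repost(feed);                // hypothetical call into the blog's posting API
            }
            Thread.sleep(60 * 1000);            // wait a minute; 99.9% of checks are wasted
        }
    }
}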

Well, Twitter has thrown in the towel on IM. This after their SMS support has already become spotty, or non-existent in many places outside the US. Twitter wants to be this exchange of tiny snippets of information, a dialog of clipped thoughts, and its success shows there is a place and need for that. But if Twitter keeps cutting down the ways it allows itself to be accessed or the information it generates to spread, it's just gonna be another silo site of subscribers, a destination instead of a pervasive service. Right now almost every phone platform has a custom client to access Twitter with -- which means it has uniquely penetrated the cellphone market -- but that is still not as integrated into mobile and chat life as SMS and IM.

What I do not understand, and Twitter will not reveal, is why this whole IM thing is so damn hard. If sexbot cam spammers can IM me on MSN 20 times a day -- and since I am not that special, millions of other IM users are being targeted as well -- why can't Twitter get its IM act together? Fine, maybe not on the closed AIM network, but at least on the open-source Jabber / GoogleTalk protocol? True to its current communication strategy about its rickety platform, Twitter will, again, say nothing more than its standard verbiage of "It's hard", "What we originally built isn't ready to be actually used by you lot", "We know you want it", "It's hard and we don't have people".

I think this communication strategy is by now failing, for the simple reason that, in the absence of actual information about the engineering challenges, it is just coming across as whiny. Yes, start-up systems are not up for full-on production, but that's why you replace them. Twitter is not running a complex financial modeling system, Twitter is not doing intense transcoding and rescaling to display video streams; no, Twitter is entering 140-character messages into a database and generating very simple pages and datastreams from them. Yes, for millions of users at the same time, I acknowledge that. I am not saying Twitter is simple, but I am saying that nothing Twitter is showing to the world right now is convincing as requiring huge creative breakthroughs, just solid management and engineering.

I don't want to believe that Twitter is anything other than the usual under-resourced shop in the computing industry. I know about having to do a lot with very little. It's just that this is becoming harder and harder to believe, since Twitter is such a Silicon Valley darling and yet keeps losing so much core functionality which, from what I can see here from the outside, could be handled by two mid-level software engineers with a good understanding of open-source IM stacks and someone with a Ph.D. or equivalent experience in distributing workloads among CPUs, in 6 months tops -- and Twitter has had serious IM issues for longer than 6 months. Either Twitter tells us what the real problems are, or they look like morons.

Monday, October 06, 2008

Goodbye Now!

Technology has created new forms of communication, and networked computer technology has exploded the number of ways we can have conversations now. One, many, writing, speaking, synchronous, asynchronous, leading to forms like email, chat, VoIP, forums, webcams, instant messaging. And just like small children need to learn the mechanics of fitting into face-to-face conversations -- don't interrupt, don't whine, have a topic -- and learn how to make phone calls and how to write letters, everyone ending up on a computer has had to learn how to deal with these new avenues. To learn how to have successful interactions, but also to learn the capabilities that are simply not available in the physical world.

Filtering is a big one. In restaurants, at parties, in any gathering, it takes one loud voice, one grating continuous sound, one TV that is too loud, and maintaining a normal conversation becomes difficult to just plain impossible. In most electronic systems, you can make irritants go away with one click somewhere. This is actually a pretty radical capability, and it is not one easily grasped as available, or desirable, by people who are new to electronic communications. We spend so much time being socialized, learning how to get along in groups, dealing with and mitigating influences, not getting our asses kicked for being jerks, learning 'workplace' and 'bar' and 'home' rules and voices, that the idea that none of this is necessary anymore is quite alien. You can make a voice go away, often for good, with one click. You can't really do that at the water cooler.

In fact, walking up to the water cooler at work and saying hi to Mary, Ali, and Omar, but completely ignoring Ralph, is seen as the height of insulting behavior. Shunning is supposed to be a grave punishment for transgressing social norms, and handing it out lightly is a rude 'mean girls in High School' thing, so the idea that it can and should be implemented online is quite the mental hurdle to overcome for many people getting online. What about our shared responsibility? What about etiquette?

Yet filtering is a vital tool online, and that becomes pretty clear when the medium gets an influx of commercial messages. If Ralph stood by that water cooler hawking HerbaLife every time you walked up, you would indeed start ignoring him pretty quick. Or, more true to electronic life, if he was hawking subscriptions to pictures of 'barely legal teens'. Man, how quickly you would reach for that silencer button so you could still talk to Ali.

There aren't any incentives online for 'getting along'; in fact there are many visceral incentives for not getting along and finally being able to get off your chest what you had to hold in all day, without getting your ass kicked. Being online can be so liberating we become pests. Well, if you don't want to spend your time online listening to Ralph and his teens and Omar talking incessantly about libtards, you need an ignore. Everyone with any experience on these systems knows this.

Which makes it so surprising that one of the oldest names in chat, one of the networks with the longest running experience in this area, did an upgrade recently that switched off filtering. Seriously. This site has been in business connecting people through chat since 1996. They're not new at this. They know about griefers, bots, political divisiveness, the way people wander in chat through topics, the way some people will repeat the same thing over and over, the way some users are just irritating, and some want and need to be utterly infuriating for their own personal reasons. Being able to filter them out is what makes people not turn away from the system in disgust -- and the maintainers of the system recently made that impossible in a system upgrade that replaced the old chat system.

This is just the surface, by the way. The whole upgrade of the system and chat site points very clearly to the conclusion that a) the UX design was done by people who never used the system, or the design fell completely off the rails during implementation, and b) there was no big beta test. Because the results are just making everyone wonder what the hell Gay.com was thinking. Yeah, I am talking about Gay.com, and its latest upgrade.

Now, when a venerable property does something like this in a major upgrade of its site and chat system -- pissing off hundreds of its users, making IMs a pain because of stability issues, making it take 30 minutes to get into your favorite chat room, removing all kinds of switches and options that made the chat rooms bearable, like being able to switch off people broadcasting the same personal ad every minute, and thus driving paying users away -- you'd think C|Net or Wired would report on it. A major site that should know better is screwing up with an unstable platform and a loss of key functionality by not using best practices in developing web services, yet not a peep.

And it's not just stability and filters; there's a whole list of other things: you can't have multiple private message windows open, they get tabbed. You can't have multiple rooms open, they get tabbed. For a while entry and exit messages could not be switched off in rooms and thus filled the whole screen in no time. It all smacks of nobody actually having used the system, or nobody having watched how hardcore chatters -- like, say, 16-year-olds -- use chat. But no.

Gay.com has had some "planned outages" since, fixing the worst stability issues and changing one usability problem -- the chatrooms are now no longer full of notifications of coming and going -- but the fundamental problems remain. And it was so unnecessary. Gay.com has always had an unstable, ugly, and clunky chat system, so many users flocked to 3rd-party chat clients. Just studying why subscribers would be so frustrated that they would go through all the trouble of installing another client would have taught the designers everything they needed to know, and that people didn't just install them to block ads or because the previous Java-based system would take whole machines down to a blue screen. And in this upgrade, Gay.com made sure all 3rd-party clients would no longer work. No work-arounds for this mess.

All the information was there. 10 minutes testing the new system by shadowing a long-time subscriber would have told them the information that now must be flooding their mailboxes in user complaint screeds. Chatters tell me they have overloaded the voice mail and the help lines, you can tell the technical team is scrambling, the blog is promising more user settings, and Gay.com is handing out free months as compensation left and right. All completely unnecessary expenses, yet Gay.com is having to make them, and some of the people leaving now will not be coming back. I don't know what the internal process was that led to such a mediocre result for a central piece of capability -- indifferent outsourcing? Cost cutting during re-implementation? A need to clear the place out of subscribers for tax reasons? -- but for being so dumb, Gay.com deserves all it gets. Which, most likely for this property that never was a huge moneymaker, will be death by loss of subscribers.

Wednesday, October 01, 2008

It's Me. No, Really, It's Me. Again.

Under every login and password field on every site that has one, there's this checkbox these days. Like on Yahoo.

checkbox under Yahoo login

I do not have a clue what it does. Well, I have some idea of what it is supposed to do. But it doesn't do it: whether I check or uncheck it, I still have to enter my password at random times. I tried to tell Yahoo about a new email address I wanted to use for a group, and I had to enter my password 4 times during the session.

Hotmail has one.

checkbox under Hotmail login

It seems to do something: when I log in and restart my browser, the mail page is open without me having to log in again. I do not use Hotmail enough to know if that is consistent behavior. But is that what either checkbox should mean? If it just pre-fills your password, hasn't it already fulfilled what it says it should do?

Almost every site with a password has a checkbox like this, to somehow make it easier to re-log. Slashdot has a reverse one.

checkbox under Slashdot login

It does the reverse: it will log you out when you close the window or tab. Otherwise, Slashdot seems to let you stay logged in forever, every time you return, which I like. The wording used for this checkbox is awful, though. It barely tells you anything.

So, so far, I have noticed a number of behaviors associated with these "remember me" password-checkboxes:
  • I am always logged in when I return

  • I am sometimes logged in when I return, and sometimes I am taken to the password page

  • I am always taken to the password page, and my login name is filled in

  • I am always taken to the password page, and my login and password is filled in

  • I am always taken to the password page, no matter what I click or not

  • While using the service and being logged in, I get asked for my password. Sometimes repeatedly. (WTF, Yahoo? I mean, seriously.)

And in the cases where login and password are filled in, I am wondering what the point of the checkbox was, since every modern browser, even on phones, will remember the name and password for you. Which means that either this checkbox was supposed to log me in and bypass the password page and didn't work, or the programmer is doing extra work the browser could have done anyway.
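
What the checkbox is presumably supposed to do, server-side, is the long-lived token trick. Here is a minimal sketch assuming a plain Java servlet back end -- not any particular site's actual code; issueTokenFor() and lookupUserForToken() are hypothetical helpers -- that sets a persistent cookie at login when the box is checked, then skips the password page when that cookie still checks out on a later visit.

// A sketch of a "remember me" flow: persistent token cookie, then no password page.
import java.io.IOException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // ...assume the password was already checked above...
        if ("on".equals(req.getParameter("rememberMe"))) {
            Cookie token = new Cookie("remember_token", issueTokenFor(req.getParameter("user")));
            token.setMaxAge(60 * 60 * 24 * 30);    // keep it for 30 days
            resp.addCookie(token);                 // this is all the checkbox should really mean
        }
        resp.sendRedirect("/home");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        Cookie[] cookies = req.getCookies();
        if (cookies != null) {
            for (Cookie c : cookies) {
                if ("remember_token".equals(c.getName()) && lookupUserForToken(c.getValue()) != null) {
                    resp.sendRedirect("/home");    // token still good: skip the password page
                    return;
                }
            }
        }
        resp.sendRedirect("/login");               // otherwise, back to the password page
    }

    // Hypothetical helpers; a real site would store and verify random tokens server-side.
    private String issueTokenFor(String user) { return user + "-token"; }
    private String lookupUserForToken(String t) { return (t != null && t.endsWith("-token")) ? t : null; }
}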

Passwords on the web are hard. You have to, as a web service provider, somehow make it safe enough for your legal team to approve, easy enough for users to use, and there are a lot of conventions but few standards on what really is good enough. Users will try to log in from every terminal on the planet, mobile, kiosk, at home, work, with different levels of security and snooping, and you want to minimize fraud to keep your costs down and make users like you. The result is this hodgepodge of checkboxes that more or less work, and, oh yes, the fact that all of us web-users have about 200 passwords to keep track of, or so. Or 2000, or 20, I have no stats here.

But even 20 passwords means you, as a user maintaining your passwords, need a system. For most people the system is to fill in the same password everywhere. You are not supposed to do this, but we have no choice: our memories, varieté acts excluded, simply are not built to remember 20 completely different strings of letters and numbers of various cases, flawlessly, unless we have to enter and use them almost every day. And nobody wants to do that. The ability to do mental password management to the level that would make security teams happy has simply not been an advantageous trait long enough (40 years? 20?) for evolution to select on it. No really, it will take longer than this for us to evolve to have password super-brains. In the meantime we will do what security experts tell us not to do because it makes us unsafe and will allow people to usurp our online identity: write them down, use easy words, use an easy pattern of mixing up the same words, repeat the same passwords and patterns. And since these checkboxes are actually making us fill in those passwords less, we are less prone to remembering them and more prone to use 'easy' passwords. But without these checkboxes, who would want to use the web, having to enter their 20 random strings per day?

Oh, by the way, now try doing this on the mobile web, with its T9 or touch-the-glass keyboards with lousy ways of implementing the shift keys. Fun fun fun. Is it any wonder people opt for all number birth date passwords? Can we blame them?

I am getting fed up with more logins and passwords. I am at the point where I will order at a premium from Amazon just so that I do not have to give my identity information to a random site yet again. I feel like every time I make a new login I am increasing my chances to be snooped on or defrauded or have the site owner look at my password and think "Hmmmm, I wonder if he used that one on other sites as well..." I have more recourse if someone runs off with my credit card than if someone logs in as me and ruins my reputation through posts and comments and reviews.

I wish OpenID was used more. I switched to Disqus for my comment system here so that I could have threading, but also because I could set up the whole thing at Disqus for this blog without having to create another password: I could log in using OpenID through Clickpass with my GMail account. It's like finding PayPal on a merchant's site: you choose your stuff, click on PayPal, get taken to the PayPal site to pay, and get sent back to the merchant. PayPal is your identifier for payment at the merchant site, just like OpenID allows big-name websites to be your identifier at sites that otherwise would need you to make a new password. Google, like Yahoo, LiveJournal, and AOL, is an OpenID producer, in that you can use your logins at those sites as a login for sites that are OpenID consumers, like Disqus and Ma.gnolia. I like that. A quick redirect, a confirmation, and I am done. Feedburner did make me create a new login and password, but once that was up I could use OpenID to have it recognize this blog as being mine. I would still have to enter my credit card number and addresses if I made a purchase -- I haven't paid for anything on a site that used OpenID for identification -- but as said, my card has good fraud protections. My passwords do not.

But isn't it unsafe, trusting your identity at one site to the security of another? Should Amazon rely on Yahoo keeping passwords safe? I think this is a non-issue. Sure, of course I have the brain that has created 2000 mixed-caps, numbers-and-letters, totally random, non-patterned strings for use as passwords on the web, and I remember them all flawlessly going from site to site without ever having written them down, honest no srsly, but I think very few other people have. Having looked at users I feel pretty confident in saying that people recycle logins and passwords so much already that the web in general is vulnerable to the fact that if you get one password for one site for Pat Webuser, you pretty much have them all. I do not know why phishers try so hard to recreate banking sites: just find a way to have people make an account for something nice you actually have and can send them, and I am sure just trying the same login and password on the top 20 US banks will hit the jackpot. But as said, I have no hard numbers. It's just what I think is true.

While using Yahoo's or AOL's password systems as identification across many websites will only make cracking those passwords more attractive, I'd rather trust them than the next shop that uses their own deployment of WebMerchantInABox 3000 on god knows what server where. It's why I still like finding PayPal on a site even though they are scary scum when it comes to conflict resolution: I get to pay the merchant without telling the merchant my data, and I know PayPal will do its true best to keep my payment data safe. OpenID is much like that.

Except for the little problem that there are more useful websites trying to be OpenID producers than consumers. When I read Yahoo and Google were embracing OpenID, I got my hopes up that I could get rid of at least one password, or merge identities. Not so. Each site will let you use its name and password at sites that are OpenID consumers like Disqus, but they aren't OpenID consumers themselves. Even LiveJournal, which has strong ties with OpenID, only allows a web user to use OpenID identification for comments; if you want to set up a blog, or get specific privileges on other blogs on the site, you still have to set up a full LiveJournal identity. This lack of support on the consumer side is annoying -- I really would have liked to clean up a lot of my identities, and it would only have made me more loyal, not less. And I would be less confused by password checkboxes.

Sunday, September 28, 2008

We're Running This By Hand?

In the final hours of negotiations, Democratic lawmakers, including Representative Rahm Emanuel of Illinois and Senator Kent Conrad of North Dakota, carried pages of the bill by hand, back and forth, from Speaker Nancy Pelosi’s office, where the Democrats were encamped, to Mr. Paulson and other Republicans in the offices of Representative John A. Boehner of Ohio, the House minority leader.

At the same time, a series of phone calls was taking place, including conversations between Ms. Pelosi and President Bush; between Mr. Paulson and the two presidential candidates, Senator John McCain and Senator Barack Obama; and between the candidates and top lawmakers.
From Breakthrough Reached in Negotiations on Bailout, New York Times, Sept 28, 2008

Am I the only one who read that and thought these people needed a software / document control and revision system like CVS or Git, a set of secure, encrypted IRC channels, and maybe a centralized comments system?

Wednesday, September 24, 2008

The Real Market

So the T-Mobile G1 Android phone
  • has a keyboard and a flip out screen
  • has no way to synchronize anything with a user's computer
  • replicates everything to the servers of a specific vendor
  • uses a specific email address and IM system, although you can add others
  • plays media as a sort of extra function
  • requires the user to use specific headset hardware
This isn't T-Mobile competing with the iPhone. This is T-Mobile creating an upgrade path for Sidekick addicts who have outgrown going everywhere on a skateboard.

Tuesday, September 23, 2008

My Opinion On The First Android Phone


I can't really wait, you know, being a pundit and all. So I have an opinion, yes, based on the leaked pics and ad copy: the impression I have had since the first leaked image of the Google Android phone being made by HTC became public is justified, and no, those first pics weren't of an ugly prototype. The Google phone T-Mobile will introduce today, built by HTC, is just plain Not Sexy. This is not going to compete with the iPhone; this box does not inspire lust, or fun, or a sense of being chic. If this little slab came to panel at America's Next Top Model, Nigel would tell her she has fallen apart over the last few weeks, Ms. Jay would have said previously during judging he had never seen anything in her anyway, Mr. Jay would say she just wasn't bringing it during shoots, whoever the celebrity model is this season would make a gruesome 'Ugh' face, and Tyra would at the end smile beatifically and tell this Beautiful Phone, after not having handed her a picture, to go home, open the pages of Crave Blog, and learn something. Then a hug, and off this little box with its seams and cliché rounded corners and lackluster capabilities goes.

This box is, at best, only going to appeal to people wondering if they should get a Windows Mobile device, and in that area HTC has already done better, either by looks...


Glamour shot of HTC Touch Pro, a Windows Mobile device with a keyboard


...or technology.


Action shot of HTC Touch HD, a Windows Mobile device with a 800x480 screen. Yes, 800 by 480 pixels

Just GMail and Google Maps isn't it. I can get those on my 2-year-old N73.

Monday, September 22, 2008

The Platform Is A Mess

Of course, the idea of having the mobile phone operators become our bank accounts for tiny transactions that add up sometimes really does seem like a bad idea. Like when you hear that somebody is on Sprint, the operator that lost hundreds of thousands, if not a million, customers in 2006 and 2007. The fact that in many horror stories, many of them available on sites like Consumerist.com, Sprint comes across as a billing basket case doesn't make me think I ever want them as the keepers of my money. It just sounds like their computers never recovered from trying to incorporate Nextel. All operators screw up, but with some you just hear the same story of the same kinds of billing snafus over and over.

Or that famous time when all Verizon customer reps seemed confused about payment per kB by two orders of magnitude. And let's be clear: as the roughly quarterly human-interest story about somebody getting killed by data charges shows, operators are not at all interested in helping you actually manage your bill. For years now we have had to read about people getting a bill the size of a year's worth of mortgage payments because they downloaded too much without noticing. Now, if it happened on a foreign trip this is almost understandable, as I am sure the networks do not bill that to each other in real time but probably do final accounting every month, but you can still get into this kind of trouble with domestic data if you just lose track of whether your limit was 3MB or 5MB, and who knows when they have browsed that much anyway? You try going to any operator and saying "Hey, you should warn me when I use more than ten bucks a day of data. Send me a text message or something." If any of them in the USA or UK will, it would be news to me.

In fact, I know first-hand the story of a person who had a SIM card from a respected-brand MVNO in the UK (an MVNO is an operator that doesn't have its own network but leases time on another network and does its own billing) that specializes in calls to foreign countries. Pay As You Go, so you can't go wrong, right? Just put exactly the amount you want to spend on the account and you can't overspend. One day a call can't be completed, and checking the account online shows that it is over £300 in the red. How can a PAYG account be negative? By that much? The person suspects the phone was doing a data transfer, like checking email or browsing, while they were unaware the MVNO SIM card was in the phone instead of their regular one, and yes, that MVNO had atrocious data rates. But a PAYG account that lets you spend £300 over the account limit? No, that's not the billing system you want managing the microcredit system.

Maybe we all got lucky that the mobile phone operators never caught on to what power they could have had by being the brokers of mini-finances on the web and mobile web. Apple and Microsoft should happily take over that role and run with it. So far their record of dealing with these billing issues is far, far better.