Friday, October 31, 2008

Density Changes The Visuals

In 1996, while I was knee-deep in medical computing, one of the press releases that caught my eye was about Xerox spinning off a display company to make displays with a very high pixel density, around 300 pixels per inch. Your average computer display has 72 pixels per inch; newer ones do 96. 300 pixels per inch is in the league of newspapers, and of the first home laser printers to be deemed 'good enough for professional use'. This means the dots on a computer screen are very big compared to paper, which is why a straight diagonal line on a screen looks jagged unless smoothing tricks like 'anti-aliasing' are used. You can make very detailed graphs and charts at 300 dpi that you cannot at 72 or 96 dpi, which is why you can still cram more static information onto a printed sheet than onto a computer screen. So this new company was making displays comparable to paper, which it was positioning as display devices for medical use, like reading electronic X-rays, or for the defense industry. Meanwhile I was facing the problem that there was no way I could put all the information contained in a single front sheet of a medical record, with its graphs and annotations, on a computer screen. Especially since 640x480 was a good screen then.


Sharp's 'FULLTOUCH 931SH' mobile phone for Japan


Well, it is twelve years later, and Sharp has just announced this phone that, at 1024 x 480 on a 3.8" diagonal screen, gets very close to that magic 300 dpi number. OK, first of all, that means the graphics chip inside that box could probably get away with less aggressive anti-aliasing routines, as jagged lines look far less jagged when the dots making up the line are so small. Also, you could probably display 6-point lettering and have it still be readable.
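The density figure is easy to check from the resolution and the diagonal: pixels per inch is the length of the pixel diagonal divided by the physical diagonal in inches. A quick sketch of the arithmetic (the 1024 x 480 at 3.8" numbers are Sharp's; the desktop comparison is my own illustration):

```java
public class PixelDensity {
    // Pixels per inch: the pixel diagonal divided by the physical diagonal.
    static double ppi(int widthPx, int heightPx, double diagonalInches) {
        double diagonalPx = Math.sqrt((double) widthPx * widthPx + (double) heightPx * heightPx);
        return diagonalPx / diagonalInches;
    }

    public static void main(String[] args) {
        // Sharp FULLTOUCH 931SH: 1024 x 480 on a 3.8" diagonal
        System.out.printf("931SH: %.0f ppi%n", ppi(1024, 480, 3.8));
        // A typical 2008 desktop panel: 1280 x 1024 on a 17" diagonal
        System.out.printf("17\" desktop: %.0f ppi%n", ppi(1280, 1024, 17.0));
    }
}
```

The phone works out to roughly 298 ppi, the desktop panel to roughly 96 ppi, which is exactly the threefold density jump the post is about.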

This display is still not a reflective display like a piece of paper; a display like this emits light from behind the lettering, which makes it less easy to read than a well-lit piece of paper. And making displays of this density is really difficult -- with every extra pixel added, the chance that a pixel on the screen won't work goes up -- so yields are probably very low unless you keep the displays small, and we won't have our A4 / Legal-sized screens of newspaper density any time soon. But still, this is getting really close to a properly useful handheld medical record, or handheld inventory list, or handheld visualizer of complex financial data, or just a darn good comics reader that does the artwork justice. Charts and graphs and layouts currently used on computer screens simply do not use the visualization potential of these densities. It's time to look at the most intense information visualization techniques used on paper, the ones from before desktop computing took off, and evolve from there, like Edward Tufte has done all through these display-oriented decades. Densities like this will also accelerate the move away from cartoonesque user interfaces and towards photographic realism, where objects look like their real-world counterparts, black outlines around elements are no longer necessary, and textures like water, leather, fur, and metal will be rendered so close to realistic they might actually look good as backgrounds.

Friday, October 24, 2008

Auction, Lottery, Auction, Lottery?

We all know how eBay and its auction system took the internet world by storm. A seller puts something up for sale and sets an (undisclosed) minimum price, and bidders enter the maximum price they are willing to pay. The computer checks all bids against the outstanding price, and if there is more than one bid, it automatically raises each bid on the bidder's behalf by the minimum increment, over and over, until the price exceeds every maximum but one, and thus only one bidder remains. Bidders who lost out are encouraged to increase their maximum price, and there's a time limit on how long the auction runs.
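That proxy-bidding scheme boils down to one observation: the bidder with the highest maximum wins, and the price settles at one increment over the runner-up's maximum, capped at the winner's own maximum. A minimal sketch, ignoring reserve prices and tie-breaking:

```java
import java.util.Arrays;

public class ProxyAuction {
    // Final price under eBay-style proxy bidding: the highest bidder wins
    // at one increment over the runner-up's maximum, capped at the winner's
    // own maximum. Assumes at least two bids.
    static double finalPrice(double[] maxBids, double increment) {
        double[] sorted = maxBids.clone();
        Arrays.sort(sorted);                          // ascending
        double highest = sorted[sorted.length - 1];   // the winner's maximum
        double runnerUp = sorted[sorted.length - 2];  // second-highest maximum
        return Math.min(runnerUp + increment, highest);
    }

    public static void main(String[] args) {
        // Three bidders with private maximums of 10, 25 and 40; increment 0.50
        System.out.println(finalPrice(new double[] {10.0, 25.0, 40.0}, 0.50));
    }
}
```

Note the winner never pays their own maximum unless the runner-up was within one increment of it, which is why eBay can honestly tell bidders to enter the most they would ever pay.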

Since I have moved to the UK and am, as always, looking for cheap stuff here, I have found many other online auction models being used. Like:
  • No time limits on auctions: every bid makes the auction price go up by a penny and extends the auction just a little longer. This means everyone gets to bid over and over and over and over. Oh, by the way, you have to pay £1.50 or so per bid. Basically you win when everyone else involved thinks they have spent enough on buying bids, including people who just logged on and are now bidding fresh this minute on something you have been spending pounds and pounds to bid on for the last hour. On eBay you are only out of money when you win the auction; here you could make 100 one-penny-increment bids, thus be out £150, and be sniped by the person who just logged in today. Are you gonna make another bid and pay another £1.50?

  • Lowest unique bid wins. There is a time limit on how long the auction runs. Place as many bids as you like at various price points, but it costs money to place a bid. The cost to place a bid differs per item; placing a bid on a high-ticket item costs more. The winning bid is the lowest bid at a price point that nobody else has put in a bid for, a.k.a. the lowest unique bid. Right now one of the items being 'auctioned' is worth £100, and placing a bid costs £0.10.

  • Buy At A Price You Like. Not strictly an auction format. An item is offered at an initially undisclosed price, starting at the list price. It costs £1 to click on the hidden price and find out what it is. Every click that someone pays £1 for lowers the price by £0.30. If you like the price disclosed to you for your £1 fee, you can immediately buy the item. Otherwise you can wait some amount of time and pay £1 to see what the current price is; perhaps other people have lowered it by multiples of £0.30 by paying to click, and it may be at a price you like now.

There are more, but these give you the idea.
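The lowest-unique-bid rule above is simple to state in code: count how many bidders chose each price point (in pennies, to avoid floating-point trouble) and take the smallest price with a count of exactly one. A sketch:

```java
import java.util.Map;
import java.util.TreeMap;

public class LowestUniqueBid {
    // Bids are in pennies; returns the lowest price point that exactly one
    // bidder chose, or -1 if no bid is unique.
    static int winner(int[] bidsInPennies) {
        Map<Integer, Integer> counts = new TreeMap<>(); // keys sorted ascending
        for (int bid : bidsInPennies) {
            counts.merge(bid, 1, Integer::sum);         // tally each price point
        }
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            if (e.getValue() == 1) return e.getKey();   // first unique = lowest
        }
        return -1;
    }

    public static void main(String[] args) {
        // 1p and 3p were each bid twice; 2p is the lowest unique bid
        System.out.println(winner(new int[] {1, 1, 2, 3, 3, 5}));
    }
}
```

Notice the perverse incentive: every extra ticket sold can un-unique someone else's bid, so the house profits from exactly the behavior that makes players lose.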

These 'auction' sites aren't for getting rid of crap in the attic. These sites only have new merchandise on offer, mostly consumer goods like iPods and cars and holidays, the kind of stuff people are told to invoke The Secret for if they want it for nearly free. And of course these aren't auctions; these are more or less raffles. Every 'bid' is just a ticket, a chance to win, and they never stop you from buying as many tickets as you want, of course not. The twist to each site is just how it selects the winning ticket. The site organizers of course make out like crazy from all these little 'rights to bid' they sell: if you can get a flat-screen TV wholesale for £400, then all you need is to sell 500 bids. Just make it as addictive as possible by, for example, having little stretches of time where you can feel like a winner until a new 'bid' makes you a loser, a feeling you can make go away by placing another bid.

Is this gambling? Is this illegal gambling? Depending on the laws of a country, the answer is yes or no. I think that is why I have only seen 'auction' sites like this in the UK; I bet in the US these sites would have to be clearly marked as lotteries, and even then there would be plenty of demand for them. But well, I guess they are OK for now in the UK, so here they are: new ways to abuse the word 'auction'.

Thursday, October 23, 2008

The Mobile Web May Be The Only Web Soon: Design For It

I was browsing Nokia's website yesterday to find out more about the upcoming 5800, which these days always involves watching some piece of Flash with whatever music some marketing exec thinks is a) hip right now and thus b) I need to have blaring at me. Unfortunately I was doing this just as Nokia's web servers were having hiccups, so every time I clicked some button in the Flash app on the page, the next Flash segment could not be loaded, and I'd get a 404 File Not Found error page. And of course, I couldn't try to just reload that segment again, because the error page had wiped out the Flash application. Flash breaks the browsing model of the browser that way. I'd have to reload the page with the Flash app, and go through the intro and the music and the posing of the phone again, before I could resume where I was. I knew that the server hiccups were temporary, and that normally the Flash segments would be loaded one after the other inside the Flash player on the page and that what I was experiencing would normally not be an issue, but then I thought, man, if I was doing some mobile browsing here, this would suck.

Mobile browsing is just like dealing with a colicky web server: often everything goes fine, but if you are on the move with your mobile device, you walk into or out of WiFi range, or, when using the mobile phone data networks, sometimes you don't get the connection, or get only half of the page. It just happens. The network cloud has gaps, dead spots, and fragile pipes, whichever wireless networking standard you use.

The whole set of web browsing technologies (the 'stack', as these things get called because diagrams of them always look like stacks of blocks with names of standards in them) is pretty resilient to this. TCP/IP was certainly specified to deal with faulty or bursty connections. HTTP, the standard for the web, was designed to make the web-browsing transaction really simple: ask for a page, get one, ask for the next page, get one, and if getting one fails, it is really simple to ask again. The browsers built around it expose this model with simple reload buttons and error messages. Network drops? Things stop, and then you can pick up again. Except for Flash, or some AJAX apps. They often rely too much on perfect connectivity.
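That 'just ask again' property is trivially easy to exploit in code, and it is exactly what a Flash segment loader could do instead of dying on one failed fetch. A minimal sketch of the retry idea in Java, with the actual network call abstracted behind a Supplier so the logic stays visible (the names here are mine, for illustration):

```java
import java.util.function.Supplier;

public class Retry {
    // Runs the request up to maxAttempts times, returning the first success.
    // This is the whole trick HTTP's simple ask-again model makes possible:
    // a failed fetch costs nothing but another identical request.
    // Assumes maxAttempts >= 1.
    static <T> T withRetries(Supplier<T> request, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return request.get();
            } catch (RuntimeException e) {
                last = e; // a dead spot or server hiccup: just try again
            }
        }
        throw last; // all attempts failed; surface the final error
    }

    public static void main(String[] args) {
        // Simulated flaky server: fails twice, then answers.
        int[] calls = {0};
        String page = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("404 hiccup");
            return "<html>the segment</html>";
        }, 5);
        System.out.println(page + " (after " + calls[0] + " attempts)");
    }
}
```

Every browser reload button is essentially this loop with a human in it; the point of the post is that rich in-page apps should carry the loop themselves.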

Which was an OK assumption for a very long time: people get broadband, they are always on, when they browse the packets arrive at their homes, so hey, as long as you request a valid file, a web designer can assume it will arrive. Well, maybe if they are outside of the house things fail on their tiny-screened phones, but hey, make a special mobile website for that. But here's the thing: the standard World Wide Web and the mobile web are not diverging. They are not continuing along parallel lines. They are converging. They are becoming one.

All phones before the iPhone demonstrated that the mobile web as conceived by WAP / OMA / iMode is mostly a stopgap, something for emergencies, not something people look forward to using if they have an alternative. The iPhone is demonstrating that many people are just fine with carrying a huge -- well, compared to a Series 40 phone or so -- piece of glass as long as they feel the usefulness trumps being tiny. Especially if you normally carry a handbag anyway. Or a messenger bag, or a backpack, or cargo pants. And that perceived utility? Well, before it became the portable game machine to beat (hi Sony, hi Nintendo!) there was the music, but we had regular iPods for that. So what was it? The ease of use? The keyboard? It can't be the camera, gawd. Well, I am actually seeing an awful lot of browsing out and about. A lot. I think it is one of the things about this machine that really drives those 10 million sales.

Seems people like having the web around. Like to be connected. I am seeing the same with netbooks: people who wouldn't have dreamed of shelling out a few grand for a tiny subnotebook a few years ago, deriding them as having too-small screens and keyboards, are snapping up netbooks now that, at 400 dollars, they have become cheap enough to run Firefox, and going "Wow, I can start up and browse in 4 seconds!" The problem wasn't the screens and the keyboards, it was the cost-to-benefit ratio. OLPCs. Big portable media player makers like Archos are putting browsers in their flagship models. We like to have the web around. The real one.

This class of Big-Glassed Mobile Devices is a new platform, and it is changing the web. I am already seeing pages being simplified to display well on this handheld platform. It's happening because the iPhones and Androids and netbooks are becoming ubiquitous and powerful in ways Cliés and Palms weren't in their heyday. Color, processing, built-in networking, speed. The web adapts to whatever significant minorities of users browse with. Smartphones are going there as well; Nokia's use of WebKit has always been about getting the 'real' web onto the phone.

But it does mean web designers have to design for this new mobility, just like they had to learn to design using web standards and not just IE-optimized pages when hordes switched to Firefox and Opera. Design for mobility outside of their m.company.com or their company.mobi domains. And mobility here means not using WAP or XHTML Mobile Profile, but just sticking to simple pages, maybe sniffing the browser to send a CSS file that hides or shows or re-arranges items on the page. And realizing that if your page should be suitable for 'snacking' -- quick on-and-off browsing, not an immersive environment people need to spend hours in, but just another site to check for some specifications or news -- it should degrade gracefully if the user walks into a dead spot. You know, not have the AJAX form get into some locked, unrecoverable state because the right extra piece of form did not come in, forcing you to reload and lose all the data you entered. Same for multi-page forms, if the user can't get to the next page immediately but needs to wait until they are in another Starbucks. And certainly not have Flash conk out and require the user to go through your whole movie again just to get back to where they were when the bus they were on gets back into network range.

Wednesday, October 22, 2008

Checking Out Android's API II

So, pretty much every 3rd-party application on a smartphone really needs to do two things these days to be useful: get information, and display it. In the previous post I mentioned that Google's Android has some very interesting facilities for programs to talk to the rest of the environment, getting data from the information repositories in the phone, like the phone book or the GPS system, or any other storage system that gets added and makes it known it has information to share. What about displaying, then? How much of a pain is it to create a visual that is up to modern standards of beautiful mobile applications?

Well, at first glance the drawing facilities look really familiar to any J2SE programmer, or anyone familiar with most other windowing toolkits: there's a Canvas object of some kind that the system gives your program to execute calls on, like 'draw a circle' or 'draw a line', and you can pass an object that describes attributes like what color to draw in, how thick a line to draw, and so on, called a Paint object.

Now, as a programmer, the minimum you need to make any kind of display is the ability to draw a single pixel in a specific color. Once you have that, you can make as elaborate a screen as you want, as long as you have time to calculate which pixels should be what color for text, for shapes, for the image you want to display. Of course, the more of these common items the API has calls for, the fewer calculations and libraries a programmer has to make, the quicker you can release, and the quicker your program will run, as these operating system calls will use the hardware much better than a 3rd-party program can.

There's of course a library for widgets like radio buttons and sliders so a UI with forms can look familiar, but Android looks to have quite the interesting set of calls there for custom graphics: the Paint object understands transparency, the Canvas has full text displaying facilities including shaping text to follow a path, it understands a good set of shapes and the painting of images loaded from files, and you can add a matrix as an argument at just about every Canvas call to display an object. That last part is actually really useful because it means the system makes it easy to do things like zooming and rotating any object you want to paint, and programmers can be sure the available hardware will be used effectively for these calculations.
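As a sketch of what those calls look like in practice -- this assumes the Android SDK, and the shapes and values are my own illustration, not taken from any particular documentation example:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Matrix;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

// A custom View: the system hands onDraw() a Canvas, and a Paint object
// carries the drawing attributes (color, transparency, anti-aliasing).
public class DemoView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public DemoView(Context context) { super(context); }

    @Override
    protected void onDraw(Canvas canvas) {
        // Semi-transparent red circle: the Paint understands alpha.
        paint.setColor(Color.RED);
        paint.setAlpha(128);
        canvas.drawCircle(100f, 100f, 50f, paint);

        // Text shaped to follow a path, as mentioned above.
        Path arc = new Path();
        arc.moveTo(40f, 200f);
        arc.quadTo(150f, 120f, 260f, 200f);
        paint.setAlpha(255);
        canvas.drawTextOnPath("Hello, Android", arc, 0f, 0f, paint);

        // Zooming and rotating via a Matrix applied to the canvas.
        Matrix m = new Matrix();
        m.postRotate(15f, 150f, 150f);
        m.postScale(1.5f, 1.5f);
        canvas.concat(m);
        canvas.drawRect(50f, 50f, 120f, 120f, paint);
    }
}
```

The Matrix part is the interesting bit: because the transform is a canvas-level argument, the same zoom or rotation applies to anything you paint afterwards, and the system gets to map it onto whatever hardware is available.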

I was also interested to find a set of animation object libraries to quickly define how specific objects should move or change shape or become more or less transparent, with helper objects to define timings. OK, this is not a ready replacement for Adobe's Flash ActionScript, built in and ready to go, and the library looks a little sparse, but it is a good start.

All in all it is more than I had available when I was doing Swing or J2ME as little as a year ago, and it should make beautiful apps a lot less painful to make. If the system is also smart and cool enough to manage the transparency of the Canvas properly so the contents 'below' the current app can come through even if the current app doesn't know what they are, some really interesting UIs become possible.

I have to say, I am tempted to find my OS X CD and install XCode to compare this with the iPhone API. Maybe if anyone asks.

Sunday, October 19, 2008

Checking Out Android's API

So, because I am between UX contracts, I am exploring programming for Google Android smartphones. There have now been plenty of reviews of the first device to run this operating system, the G1 by T-Mobile, but I wondered what the view of this system was like from the inside, the place where application developers get to live. The interface that a programmer uses to create programs for an operating system (the API) tells you a lot about what the designers of the operating system really wanted and expected.

(Incidentally, admitting I did this is not the brightest idea: I have found so far the User Experience business is really skeptical about consultants who can program. I have had to really convince some prospective employers that yes, I did major in software architecture and got plenty of work in it, but it was only to get to make user interfaces, and really, I am fine leaving it behind. No, no, I do not miss it.)

And what Google obviously wants and expects is that no application on Android, even the ones the phone maker provides, is beyond being improved upon and replaced by 3rd-party developers. Android was obviously designed to have every piece be ripped out and replaced and yet still have all the other pieces work. It is one thing to say this -- programmers hear this stuff all the time -- but it is another to engineer a programming environment that means it. It really looks like Google means it.

Everything is more abstract than in any other phone programming environment I have dealt with. Of course, by using Java, the memory system is already abstracted away, and not something any programmer needs to worry about, which was already the first huge step in phone programming that J2ME gave us. (Because, seriously, was I as a programmer supposed to be better at managing memory than the latest algorithms keeping track for me? The moment the computer had the resources to manage memory, it should; programmers are too expensive to be bothered with it if they needn't be.)

The J2ME paradigm is to give the programmer a core virtual environment, and then unlock different parts of the phone, over the years, as phone makers agree with Sun on what the API for each sub-system -- the phone book, the media player, the Bluetooth stack -- needs to look like. Which means every one of the optional APIs (they call them JSRs) has lowest-common-denominator functionality, is implemented for the phone by each phone maker, and thus of course has incompatibilities.

Android has no concrete subsystems to talk to. This makes it more flexible. There is no extra library, no extension to probe and program, for some new facility. Every program, every subsystem, announces to Android what it does by registering itself with the Android core as a data provider, or as willing to provide a user interface for a certain function, or both. It registers using constants like 'pick a user out of the phone book' or 'know all the dates in the calendar', and programs that know these constants can ask Android: "I need what this constant promises; can you start it, or fetch it? Call me when it is finished." Android comes with many constants pre-defined, but as a programmer you can register anything new you want. Data is exchanged all through the system in the form of URLs that express "the 3rd entry in the phone book" or "the current location", and only at the last moment necessary should a program hand that URL to Android and ask "What is this actually?"; Android will look it up from the owner of the information, without the program having to know which sub-system that actually is.

This means that programs need to know nothing about how other systems work -- or even which system gets called -- to get results. They just make an abstract request and it gets fulfilled, as long as they know the proper request strings. They can register themselves as handling other requests without needing to know how they will be called. This means that every sub-system can easily be replaced without anything breaking. The possible failure mode here is that, as a 3rd-party ecosystem develops, new programs may not know the right constants other new programs advertise as new capabilities, but that is easily coordinated by Google with just a developer website listing constants.
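In code, the two halves of this pattern look roughly like the following -- the Android SDK is assumed, and this is my own illustration of the mechanism, not a complete program. A program asks for a capability by constant, and only resolves the returned data URL at the last moment:

```java
import android.app.Activity;
import android.content.Intent;
import android.database.Cursor;
import android.net.Uri;
import android.provider.Contacts;

public class PickContactDemo extends Activity {
    private static final int PICK_REQUEST = 1;

    // Ask Android for "whatever handles picking a person"; we never name
    // the phone-book application, only the constant and the data URL.
    void pickSomeone() {
        Intent pick = new Intent(Intent.ACTION_PICK, Contacts.People.CONTENT_URI);
        startActivityForResult(pick, PICK_REQUEST);
    }

    // "Call me when it is finished": Android hands back a URL naming the
    // chosen entry, e.g. content://contacts/people/3.
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == PICK_REQUEST && resultCode == RESULT_OK) {
            Uri person = data.getData();
            // Only now ask "what is this actually?" -- Android routes the
            // query to whichever subsystem owns the data.
            Cursor c = getContentResolver().query(person, null, null, null, null);
            // ... read the contact's fields from the cursor ...
            c.close();
        }
    }
}
```

If a 3rd party replaces the phone book, nothing in this program changes: the ACTION_PICK constant and the content URL are the whole contract.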

Can the phone book actually be ripped out and replaced? Will an Android phone allow the built-in mail reader to be replaced by something easier or better made by a 3rd party? From the documentation there certainly seems to be no impediment to a 3rd-party program registering itself as a data or UI provider for existing capabilities, but we will see what actual implementations allow. Still, this is the most flexible and future-proof environment I have touched.

I still need to explore the graphics to find out how difficult it is to make something beautiful and sophisticated appear on the screen, something the iPhone does really well by having ported Quartz, the all-blending, all-zooming graphics API from OS X. The graphics capability of the iPhone is now the bar that must be reached in this area for a new phone OS to seem credible. But I can already see that Android isn't just some new flavor of J2ME. It is quite different.

So it irks me that the job ads I see for it all specify you have to be a J2ME programmer to apply. It's lazy, and it shows the recruiters or hiring managers do not understand how different Android is. As I have shown, getting something done with other parts of the phone is really different. Android is not as obsessed as J2ME with looking constrained and small (probably because Android phones are expected to run on smartphone hardware). Good J2ME programmers must excel at programming around the bugs and contingencies of running on different J2ME implementations; Android programmers will not have to, as Google will supply the whole environment and runtime for every phone. These days somebody who specializes in Java for the desktop (J2SE) has a tough time telling employers they can do J2ME programming, and vice versa, because while desktop programs and phone programs may both use Java, the subsystems are so different. Well, I think Android programming really is a third variant, and I do not see at all why J2ME programmers should be the preferred ones to look to for switching. Just get smart people.

Monday, October 13, 2008

Twitter Admits Defeat, And I Kinda Harsh Them On It

Because my circle of friends and my need for self-expression do not fall neatly along the lines of brand silos like my blogging and Twitter, I created AutoPostBot. AutoPostBot is a program that I have running on my print and storage server here (an old Fujitsu subnotebook running Win2K) that monitors my Twitter feed and reposts it, almost live, to my personal blog.

Why? Because I wanted to see what would happen. It has been instructive, but I can discuss that later. Engineering-wise, I had a choice when I made the bot: how does AutoPostBot get my Twitter entries ('tweets')? Should it check my Twitter account for new entries every reasonable amount of time ('pull the tweets') or listen to some Twitter channel to be told when a new entry happens ('get tweets pushed')?

If I want my tweets to appear on my blog nearly simultaneously with their publication on Twitter, using pull would mean having to check the feed very often, and 99.9% of the checks would not show a new tweet, since I tweet once a day or so at most. That is pretty wasteful of network resources. For AutoPostBot to get them pushed means AutoPostBot needs some channel to listen to that sends updates. Well, Twitter supposedly allows you to subscribe through IM, so AutoPostBot could implement an IM chat client, sign up with Twitter to be told about updates to my Twitter feed, and repost them. Seems pretty standard, robust. Many people have implemented chat bots that open IM channels and listen. This shouldn't be too hard.

Except that Twitter has never been able to maintain a reliable IM channel, either to send your tweets through or to receive updates from other Twitter users from. Never worked. It always had 'outages', or worse: when Twitter sent me an IM with a tweet from someone I was following while I was not at my desk, my chat client would respond with 'I am not at my desk' (very common for chat clients to do), and Twitter would think that was an update from me and post it as my tweet. Which was just dumb; I'd have all these tweets basically saying I was not at my desk. Still, IM as a communication channel was a major bullet point for Twitter, even though it never worked over AIM any time I tried for months and months, and was spotty on Jabber. I therefore had no choice but to code AutoPostBot to pull my Twitter feed every minute, and it is one of the reasons I didn't open AutoPostBot up for reposting tweets from other Twitter users to their blogs: if it had become somewhat successful, AutoPostBot could no longer have been a hobby project on my 900MHz Win2K box, because it would have had to pull too many things too often. I wanted IM. I wanted updates to be pushed to AutoPostBot, with no more network traffic than necessary.
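The pull approach AutoPostBot ended up with is the dumb-but-robust one: a timer fires every minute, fetches the feed, and reposts anything newer than the last seen entry. A sketch of that loop, with the Twitter fetch and the blog post abstracted behind interfaces (the interface names are mine, for illustration):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

public class PollingReposter {
    // The two sides of the bot, abstracted: fetch tweets newer than an id,
    // and repost one tweet to the blog.
    interface TweetSource { List<String> tweetsSince(long lastId); long latestId(); }
    interface BlogPoster { void post(String text); }

    private final TweetSource source;
    private final BlogPoster blog;
    private long lastSeenId;

    PollingReposter(TweetSource source, BlogPoster blog) {
        this.source = source;
        this.blog = blog;
    }

    // One poll: the wasteful part is that 99.9% of these find nothing new.
    void pollOnce() {
        for (String tweet : source.tweetsSince(lastSeenId)) {
            blog.post(tweet);
        }
        lastSeenId = source.latestId();
    }

    // Check every 60 seconds, forever, on a daemon timer thread.
    void start() {
        new Timer(true).scheduleAtFixedRate(new TimerTask() {
            @Override public void run() { pollOnce(); }
        }, 0, 60_000);
    }

    public static void main(String[] args) {
        // Wire the bot to a canned one-tweet source and print reposts.
        TweetSource canned = new TweetSource() {
            public List<String> tweetsSince(long lastId) {
                return lastId == 0 ? Arrays.asList("hello from Twitter") : Collections.emptyList();
            }
            public long latestId() { return 1; }
        };
        PollingReposter bot = new PollingReposter(canned, text -> System.out.println("posted: " + text));
        bot.pollOnce(); // reposts the one tweet
        bot.pollOnce(); // finds nothing new
    }
}
```

With push, the `pollOnce` timer disappears and the IM library calls the bot instead; the cost of losing IM is exactly this loop multiplied by every user you would want to serve.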

Well, Twitter has thrown in the towel on IM. This after their SMS support had already become spotty, or non-existent in many places outside the US. Twitter wants to be this exchange of tiny snippets of information, a dialog of clipped thoughts, and its success shows there is a place and a need for that. But if Twitter keeps cutting down the ways it allows itself to be accessed, or the ways the information it generates can spread, it's just gonna be another silo site of subscribers, a destination instead of a pervasive service. Right now almost every phone platform has a custom client to access Twitter with -- which means it has uniquely penetrated the cellphone market -- but that is still not as integrated into mobile and chat life as SMS and IM.

What I do not understand, and Twitter will not reveal, is why this whole IM thing is so damn hard. If sexbot cam spammers can IM me on MSN 20 times a day -- and since I am not that special, millions of other IM users are being targeted as well -- why can't Twitter get its IM act together? Fine, not on the closed AIM network, but not even on the open-source Jabber / Google Talk protocol? True to its current communication strategy about its rickety platform, Twitter will, again, say nothing more than its standard verbiage of "It's hard", "What we originally built isn't ready to be actually used by you lot", "We know you want it", "It's hard and we don't have people".

I think this communication strategy is by now failing, for the simple reason that, in the absence of actual information about the engineering challenges, it just comes across as whiny. Yes, start-up systems are not up for full-on production, but that's why you replace them. Twitter is not running a complex financial modeling system, Twitter is not doing intense transcoding and rescaling to display video streams; no, Twitter is entering 140-character messages into a database and generating very simple pages and datastreams from them. Yes, for millions of users at the same time, I acknowledge that. I am not saying Twitter is simple, but I am saying that nothing Twitter is showing the world right now is convincing as requiring huge creative breakthroughs, just solid management and engineering.

I don't want to believe that Twitter is anything other than the usual under-resourced shop in the computing industry. I know about having to do a lot with very little. It's just that this is becoming harder and harder to believe, since Twitter is such a Silicon Valley darling and yet keeps losing so much core functionality which, from what I can see from the outside, could be delivered by two mid-level software engineers with a good understanding of open-source IM stacks, plus someone with a Ph.D. or equivalent experience in distributing workloads among CPUs, in 6 months tops -- and Twitter has had serious IM issues for longer than 6 months. Either Twitter tells us what the real problems are, or they look like morons.

Monday, October 06, 2008

Goodbye Now!

Technology has created new forms of communication, and networked computer technology has exploded the number of ways we can have conversations now. One-to-one, one-to-many, writing, speaking, synchronous, asynchronous, leading to forms like email, chat, VoIP, forums, webcams, instant messaging. And just like small children need to learn the mechanics of fitting into face-to-face conversations -- don't interrupt, don't whine, have a topic -- and learn how to make phone calls and how to write letters, everyone ending up on a computer has had to learn how to deal with these new avenues. To learn how to have successful interactions, but also to learn the capabilities that are simply not available in the physical world.

Filtering is a big one. In restaurants, at parties, in any gathering, it takes one loud voice, one grating continuous sound, one TV that is too loud, and maintaining a normal conversation becomes difficult to just impossible. In most electronic systems, you can make irritants go away with one click somewhere. This is actually a pretty radical capability, and it is not one easily grasped as available, or desirable, by people who are new to electronic communications. We spend so much time being socialized -- learning how to get along in groups, dealing with and mitigating influences, not getting our asses kicked for being jerks, learning 'workplace' and 'bar' and 'home' rules and voices -- that the idea that none of this is necessary anymore is quite alien. You can make a voice go away, often for good, with one click. You can't really do that at the water cooler.

In fact, walking up to the water cooler at work and saying hi to Mary, Ali, and Omar, but completely ignoring Ralph, is seen as the height of insult. Shunning is supposed to be a grave punishment for transgressing social norms, and handing it out lightly is a rude, 'mean girls in high school' thing, so the idea that it can and should be implemented online is quite the mental hurdle to overcome for many people getting online. What about our shared responsibility? What about etiquette?

Yet filtering is a vital tool online, and that becomes pretty clear when the medium gets an influx of commercial messages. If Ralph stood by that water cooler hawking Herbalife every time you walked up, you would indeed start ignoring him pretty quickly. Or, more true to electronic life, if he was hawking subscriptions to pictures of 'barely legal teens'. Man, how quickly you would reach for that silencer button so you could still talk to Ali.

There aren't any incentives online for 'getting along'; in fact, there are many visceral incentives for not getting along and finally being able to get off your chest what you had to hold in all day, without getting your ass kicked. Being online can be so liberating that we become pests. Well, if you don't want to spend your time online listening to Ralph and his teens and Omar talking incessantly about libtards, you need an ignore. Everyone with any experience on these systems knows this.

Which makes it so surprising that one of the oldest names in chat, one of the networks with the longest running experience in this area, did an upgrade recently that switched off filtering. Seriously. This site has been in business connecting people through chat since 1996. They're not new at this. They know about griefers, bots, political divisiveness, the way people wander in chat through topics, the way some people will repeat the same thing over and over, the way some users are just irritating, and some want and need to be utterly infuriating for their own personal reasons. Being able to filter them out is what makes people not turn away from the system in disgust -- and the maintainers of the system recently made that impossible in a system upgrade that replaced the old chat system.

This is just the surface, by the way. The whole upgrade of the system and chat site points very clearly to the fact that a) the UX design was done by people who never used the system, or the design fell completely off the rails during implementation, and b) there was no big beta test. Because the results just make everyone wonder: what the hell was Gay.com thinking? Yeah, I am talking about Gay.com and its latest upgrade.

Now, when a venerable property does something like this in a major upgrade of its site and chat system -- pissing off hundreds of its users, making IMs a pain because of stability issues, making it take 30 minutes to get into your favorite chat room, removing all kinds of switches and options that made the chat rooms bearable, like being able to switch off people broadcasting the same personal ad every minute, and thus driving paying users away -- you'd think C|Net or Wired would report on it. A major site that should know better is screwing up with an unstable platform and a loss of key functionality by not using best practices in developing web services, and yet, not a peep.

And that is just a partial list of the problems beyond stability and filters: you can't have multiple private message windows open, they get tabbed. You can't have multiple rooms open, they get tabbed. For a while, entry and exit messages could not be switched off in rooms and thus filled the whole screen in no time. It all smacked of nobody actually having used the system, or nobody having watched hardcore chatters -- say, 16-year-olds -- and how they use chat. But no.

Gay.com has had some "planned outages" since, fixing the worst stability issues and correcting one usability problem -- the chat rooms are no longer full of notifications of comings and goings -- but the fundamental problems remain. And it was all so unnecessary. Gay.com has always had an unstable, ugly, and clunky chat system, so many users flocked to third-party chat clients. Just studying why subscribers would be so frustrated that they would go through all the trouble of installing another client would have taught the designers everything they needed to know: that people didn't just install them to block ads, or because the previous Java-based system could take whole machines down to a blue screen. And in this upgrade, Gay.com made sure all third-party clients would no longer work. No work-arounds for this mess.

All the information was there. Ten minutes spent testing the new system by shadowing a long-time subscriber would have given them the information that must now be flooding their mailboxes in user complaint screeds. Chatters tell me they have overloaded the voice mail and the help lines; you can tell the technical team is scrambling, the blog is promising more user settings, and Gay.com is handing out free months as compensation left and right. All completely unnecessary expenses, yet Gay.com is having to make them, and some of the people leaving now will not be coming back. I don't know what the internal process was that led to such a mediocre result for a central piece of capability -- indifferent outsourcing? Cost-cutting during re-implementation? A need to clear the place of subscribers for tax reasons? -- but for being so dumb, Gay.com deserves all it gets. Which, most likely, for a property that never was a huge moneymaker, will be death by loss of subscribers.

Wednesday, October 01, 2008

It's Me. No, Really, It's Me. Again.

Under every login and password field on every site that has one, there's this checkbox these days. Like on Yahoo.

checkbox under Yahoo login

I do not have a clue what it does. Well, I have some idea of what it is supposed to do, but it doesn't do it: whether I check or uncheck it, I still have to enter my password at random times. I tried to tell Yahoo about a new email address I wanted to use for a group, and I had to enter my password 4 times during the session.

Hotmail has one.

checkbox under Hotmail login

It seems to do something: when I log in and restart my browser, the mail page opens without me having to log in again. I do not use Hotmail enough to know if that is consistent behavior. But is that what either checkbox should mean? If it just pre-fills your password, hasn't it fulfilled what it says it should do already?

Almost every site with a password has a checkbox like this, to somehow make it easier to re-log. Slashdot has a reverse one.

checkbox under Slashdot login

It does the reverse: it will log you out when you close the window or tab. Otherwise, Slashdot seems to keep you logged in forever, every time you return, which I like. The wording on this checkbox is awful, though; it barely tells you anything.

So far, I have noticed a number of behaviors associated with these "remember me" password checkboxes:
  • I am always logged in when I return

  • I am sometimes logged in when I return, and sometimes I am taken to the password page

  • I am always taken to the password page, and my login name is filled in

  • I am always taken to the password page, and my login and password is filled in

  • I am always taken to the password page, no matter what I click or don't click

  • While using the service and being logged in, I get asked for my password. Sometimes repeatedly. (WTF, Yahoo? I mean, seriously.)

And in the cases where login and password are filled in, I am wondering what the point of the checkbox was, since every modern browser, even on phones, will remember the name and password for you. Which means that either this checkbox was supposed to log me in and bypass the password page and didn't work, or the programmer is doing extra work the browser could have done anyway.
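What a working "remember me" checkbox usually does server-side is set a long-lived, signed cookie so the session survives browser restarts, instead of relying on the browser to re-enter anything. A sketch of the idea, with a hypothetical secret and a 30-day lifetime I picked arbitrarily:

```python
# Sketch of a "remember me" token: the server signs "username:expiry"
# and stores it in a persistent cookie; on return visits it verifies
# the signature and expiry instead of asking for the password again.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; a real site keeps this private

def make_token(username, now=None, lifetime=30 * 24 * 3600):
    expires = int(now if now is not None else time.time()) + lifetime
    payload = f"{username}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_token(token, now=None):
    """Return the username if the cookie is valid and unexpired, else None."""
    try:
        username, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    payload = f"{username}:{expires}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None  # tampered cookie
    if int(expires) < (now if now is not None else time.time()):
        return None  # the checkbox's promise has expired
    return username

token = make_token("pat")
print(check_token(token))  # prints 'pat': recognized without retyping a password
```

When a site with such a checkbox still demands your password at random moments, either the cookie's lifetime is absurdly short, or some pages ignore it, which is exactly the inconsistency the list above describes.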

Passwords on the web are hard. You have to, as a web service provider, somehow make it safe enough for your legal team to approve, easy enough for users to use, and there are a lot of conventions but few standards on what really is good enough. Users will try to log in from every terminal on the planet, mobile, kiosk, at home, work, with different levels of security and snooping, and you want to minimize fraud to keep your costs down and make users like you. The result is this hodgepodge of checkboxes that more or less work, and, oh yes, the fact that all of us web-users have about 200 passwords to keep track of, or so. Or 2000, or 20, I have no stats here.

But even 20 passwords means you, as a user maintaining your passwords, need a system. For most people the system is to use the same password everywhere. You are not supposed to do this, but we have no choice: our memories, varieté acts excluded, simply are not built to flawlessly remember 20 completely different strings of mixed-case letters and numbers, unless we have to enter and use them almost every day. And nobody wants to do that. The ability to do mental password management to the level that would make security teams happy has simply not been an advantageous trait long enough (40 years? 20?) for evolution to select on it. No really, it will take longer than this for us to evolve password super-brains. In the meantime we will do what security experts tell us not to do, because it makes us unsafe and will allow people to usurp our online identities: write them down, use easy words, use an easy pattern of mixing up the same words, repeat the same passwords and patterns. And since these checkboxes are actually making us fill in those passwords less, we are less prone to remembering them and more prone to using 'easy' passwords. But without these checkboxes, who would want to use the web, having to enter their 20 random strings per day?

Oh, by the way, now try doing this on the mobile web, with its T9 or touch-the-glass keyboards with lousy ways of implementing the shift keys. Fun fun fun. Is it any wonder people opt for all number birth date passwords? Can we blame them?

I am getting fed up with more logins and passwords. I am at the point where I will order at a premium from Amazon just so that I do not have to give my identity information to yet another random site. I feel like every time I create a new login I am increasing my chances of being snooped on or defrauded, or of having the site owner look at my password and think "Hmmmm, I wonder if he used that one on other sites as well..." I have more recourse if someone runs off with my credit card than if someone logs in as me and ruins my reputation through posts and comments and reviews.

I wish OpenID were used more. I switched to Disqus for my comment system here so that I could have threading, but also because I could set up the whole thing at Disqus for this blog without having to create another password: I could log in using OpenID through Clickpass with my GMail account. It's like finding PayPal on a merchant's site: you choose your stuff, click on PayPal, get taken to the PayPal site to pay, and get sent back to the merchant. PayPal is your identifier for payment at the merchant site, just like OpenID allows big-name websites to be your identifier at sites that would otherwise need you to make a new password. Google, like Yahoo, LiveJournal, and AOL, is an OpenID provider, in that you can use your logins at those sites as a login for sites that are OpenID consumers, like Disqus and Ma.gnolia. I like that. A quick redirect, a confirmation, and I am done. Feedburner did make me create a new login and password, but once that was set up I could use OpenID to have it recognize this blog as mine. I would still have to enter my credit card number and addresses if I made a purchase -- I haven't paid for anything on a site that used OpenID for identification -- but as said, my card has good fraud protections. My passwords do not.
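The redirect dance above can be sketched as a toy simulation. This is heavily simplified (real OpenID adds discovery, nonces, and association keys, and all the names below are mine), but it shows the crucial property: the consumer site never sees the password, it only gets a yes/no answer from the provider.

```python
# Toy simulation of the OpenID flow: the user authenticates at the
# provider, and the consumer site verifies the resulting assertion
# by asking the provider -- never by handling the password itself.
import secrets

class Provider:
    """Toy identity provider, standing in for e.g. Yahoo."""
    def __init__(self):
        self.passwords = {"pat": "hunter2"}
        self.issued = set()

    def login(self, user, password):
        # The password is typed here, at the provider, and nowhere else.
        if self.passwords.get(user) == password:
            token = secrets.token_hex(8)
            self.issued.add((user, token))
            return token
        return None

class Consumer:
    """Toy relying site, standing in for e.g. Disqus."""
    def accept(self, provider, user, token):
        # Verify the assertion by calling back to the provider.
        return (user, token) in provider.issued

yahoo = Provider()
disqus = Consumer()
token = yahoo.login("pat", "hunter2")      # user authenticates at the provider
print(disqus.accept(yahoo, "pat", token))  # prints True: Disqus trusts Yahoo's answer
```

One password at the provider, zero new passwords at each consumer: that is the whole appeal.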

But isn't it unsafe, trusting your identity at one site to the security of another? Should Amazon rely on Yahoo keeping passwords safe? I think this is a non-issue. Sure, of course I have the brain that has created 2000 mixed-case, numbers-and-letters, totally random, non-patterned strings for use as passwords on the web, and I remember them all flawlessly going from site to site without ever having written them down, honest, no srsly -- but I think very few other people do. Having watched users, I feel pretty confident in saying that people recycle logins and passwords so much already that the web in general is vulnerable: if you get one password for one site for Pat Webuser, you pretty much have them all. I do not know why phishers try so hard to recreate banking sites: just find a way to have people make an account for something nice you actually have and can send them, and I am sure just trying the same login and password on the top 20 US banks will hit the jackpot. But as said, I have no hard numbers. It's just what I think is true.

While using Yahoo's or AOL's password systems as identification across many websites will only make cracking those passwords more attractive, I'd rather trust them than the next shop running its own deployment of WebMerchantInABox 3000 on god knows what server where. It's why I still like finding PayPal on a site, even though they are scary scum when it comes to conflict resolution: I get to pay the merchant without giving the merchant my data, and I know PayPal will do its best to keep my payment data safe. OpenID is much like that.

Except for the little problem that there are more useful web sites trying to be OpenID providers than consumers. When I read that Yahoo and Google were embracing OpenID, I got my hopes up that I could get rid of at least one password, or merge some identities. Not so. Each site will let you use its name and password at sites that are OpenID consumers, like Disqus, but they aren't OpenID consumers themselves. Even LiveJournal, which has strong ties with OpenID, only allows a web user to use OpenID identification for comments; if you want to set up a blog, or get specific privileges on other blogs on the site, you still have to set up a full LiveJournal identity. This lack of support on the consumer side is annoying -- I really would have liked to clean up a lot of my identities, and it would only have made me more loyal, not less. And I would be less confused by password checkboxes.