Showing posts with label javascript.

Thursday, May 17, 2007

Playing Catchup

I seem to have got out of the blogging habit, so I'm hoping to catch up on a few posts now. I'll tweak the dates so they're relevant to the events roughly as they happened (chronology? what's that?!)

The first event I'd like to make a post about was the excellent -

Web Standards Group Meeting on Javascript

Some of us shy away from JavaScript (until recently, myself included) on the grounds that it's not accessible. But these days, if it's done right, it can be positively beneficial to accessibility.

Demystifying Screen Readers - Steve Faulkner
Steve is very knowledgeable on screen readers and all their foibles, and is Director of the Web Accessibility Tools Consortium. This talk mainly centred around JAWS (65%) and Window Eyes (35%). The bracketed figures are from a US National Federation of the Blind market share survey - it's obvious these are the two big players.

The key issues revolve around:

  • Dynamic updates - user initiated and independent
    Can the user access the updated content?
    Is the user aware that the content has been updated?
  • Rich Internet Applications (RIA)
    Can the user understand the role of the control?
    Can the user successfully interact with the control?
    Is the user able to access information about the current state of the control?
He then explained the differences in screen reader modes:
  • Browse Mode (virtual buffer) - the user can navigate page content via paragraphs, headings, links, lists etc. They can also activate links and some form controls. But in this mode the user can't type text into form fields or interact with select elements.
  • Forms Mode (browse mode off) - the user may only navigate through a document to focusable elements via the TAB key. Text access is limited to "read all" functionality. Most advanced content navigation is unavailable.
The crucial question we have to consider is, when and how does content become available to the user after it's been updated in the browser?

[Steve Faulkner and the Latency Issue]

Latency is a problem because the virtual buffer does not update and the user doesn't know anything has changed. However, JAWS v7.1 started "listening" for virtual buffer updates in response to things like:
  • window.setInterval()
  • object.innerText (for IE)
  • object.textContent and object.appendChild (in Firefox)
  • changes in form control values
  • And other stuff like ALT or TITLE attribute value changes.
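To make that list concrete, here's a minimal sketch of my own (the function and element names are hypothetical): updating text through the properties JAWS 7.1 listens for, picking whichever one the browser supports.

```javascript
// Sketch only: update a status element via the property the browser
// supports. JAWS 7.1+ notices textContent changes in Firefox and
// innerText changes in IE, so the virtual buffer gets refreshed.
function setStatusText(element, message) {
  if (typeof element.textContent !== "undefined") {
    element.textContent = message; // Firefox and other W3C DOM browsers
  } else {
    element.innerText = message;   // Internet Explorer
  }
}
```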
Gez Lemon has an excellent article, Improving Ajax Applications For JAWS Users, on his website. Steve summed up with some recommendations:
  • Do not code to accommodate the poor support shown by JAWS and Window Eyes.
  • Use unobtrusive methods where available and appropriate, to help screen readers along.
  • Don't use the excuse that JavaScript/Ajax isn't accessible to screen readers as a reason not to design for accessibility.
  • Start developing interface elements that use WAI-ARIA specs, which will provide some benefits now and many more in the future.
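As a rough illustration of that last recommendation (a sketch of my own - makeToggleButton is a hypothetical helper, not anything Steve showed), giving a custom control an ARIA role and state lets a screen reader answer the "role" and "state" questions from the RIA list above:

```javascript
// Sketch: give a custom control an ARIA role and a state that
// screen readers can report. "role" and "aria-pressed" are real
// WAI-ARIA attributes; the helper itself is hypothetical.
function makeToggleButton(element, onToggle) {
  element.setAttribute("role", "button");
  element.setAttribute("aria-pressed", "false");
  element.onclick = function () {
    var pressed = element.getAttribute("aria-pressed") === "true";
    element.setAttribute("aria-pressed", pressed ? "false" : "true");
    if (onToggle) onToggle(!pressed);
  };
}
```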
Steve's thought-provoking presentation was followed by a turn from Christian Heilmann entitled Seven Reasons For Code Bloat.

[Christian's been on the beanz again]

His notes are available for download from his blog, so I won't repeat them verbatim. Needless to say, it was a fun presentation and contained the obligatory photo of a kitten ;-). Meanwhile, he's thinking of this as the title of his next book:

[Christian's Next Book?]

PubStandards XVIII
Of course, the next item on the social agenda was the PubStandards gathering. Lots of fun and revelry as usual; here's one photo, but you can see more on Flickr.

[Patrick & Ashe go head-to-head, while Ross butts in the middle]

Saturday, November 18, 2006

Playpen #6 - sIFR Headlines

I've been meaning to experiment with sIFR headline styling ever since hearing Dave Shea's Fine Typography On The Web piece during the @media 2006 conference. I've finally got a demo going at playpen #6.

What does sIFR mean?
sIFR stands for Scalable Inman Flash Replacement, and is an unobtrusive JavaScript/Flash solution for providing lovely fonts on your site (eg for headlines) whilst still remaining accessible, and not relying on that font being installed on a user's machine. Read more about the technique by visiting the official sIFR wiki/documentation site. H1 and H2 headings are best restyled using sIFR, rather than large bodies of text. If a browser does not have JavaScript enabled, the headlines will just be styled by the regular CSS definitions, so it degrades gracefully.

Why Bother?
There are several techniques for image replacement. The Gilder/Levin method is one such (see Dave Shea's article, which explains some of the others too). Gilder/Levin is recognised as one of the best from an accessibility standpoint. But the downside is that you have to manually generate each graphic used to replace your text, plus add a specific CSS rule for each in your stylesheet. That's all very well if you have a smallish, static site and not many headings to replace. But what about database-driven sites and blogs, where you don't know in advance what text will need replacing? Under those circumstances, sIFR is the only practical way to go.

Where Can I Get It?
More information and a download for the code can be found at Mike Davidson's sIFR page.

Where Is It Used?
Keep an eye out for any sites which use unusual typography for headings or recurrent elements. If it's a database-driven site (such as ecommerce or a blog), the chances are sIFR is the method in use. Two likely candidates off the top of my head are:

Wednesday, September 27, 2006

Playpen #3 - Changing Your Stripes

You know what they say about Leopards... well at least you can get a table to change its stripes with a bit of DOM scripting.

It's a fairly trivial problem, but seeing as I'm pretty green when it comes to unobtrusive JavaScript, it's somewhere to start!

The Playpen #3 page shows off the table: a new class is added to alternate rows, and the CSS defines a new background colour for it. OnMouseOver changes the class again, to give another colour. But I'm having real trouble resetting the original class/colour onMouseOut... It's probably because the DOM is changed on the fly, and the original (not moused-over) state of the alternate row is never actually "stored" on the page. If anyone has any suggestions, I'd be very interested to hear them.

For the record, my stripeTables script looks like this:

function stripeTables() {
  if (!document.getElementsByTagName) return false;
  var tables = document.getElementsByTagName("table");
  for (var i = 0; i < tables.length; i++) {
    var odd = false;
    var rows = tables[i].getElementsByTagName("tr");
    for (var j = 0; j < rows.length; j++) {
      if (odd == true) {
        addClass(rows[j], "altrow");
        odd = false;
      } else {
        odd = true;
      }
    }
  }
}

addLoadEvent(stripeTables);
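(For reference, addClass and addLoadEvent are the helper functions from Jeremy Keith's DOM Scripting book; reproduced here from memory, so check the book for the canonical versions.)

```javascript
// Queue a function to run on page load without clobbering any
// previously assigned window.onload handler.
function addLoadEvent(func) {
  var oldonload = window.onload;
  if (typeof window.onload != "function") {
    window.onload = func;
  } else {
    window.onload = function () {
      oldonload();
      func();
    };
  }
}

// Append a class name to an element, preserving existing classes.
function addClass(element, value) {
  if (!element.className) {
    element.className = value;
  } else {
    element.className += " " + value;
  }
}
```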

And this is highlightRows:

function highlightRows() {
  if (!document.getElementsByTagName) return false;
  var rows = document.getElementsByTagName("tr");
  for (var i = 0; i < rows.length; i++) {
    var rowclass = rows[i].getAttribute("class");
    rows[i].onmouseover = function() {
      addClass(this, "highlight");
    }
    rows[i].onmouseout = function() {
      this.setAttribute("class", "rowclass[i]");
    }
  }
}

addLoadEvent(highlightRows);
I thought getting the class attribute and storing it as rowclass would allow me to reset it to what it was before the onMouseOver event, but sadly the table rows become unstripey once they are moused over!

The only other way I can think of doing it is writing some sort of subtractClass script to complement addClass, but seeing as this will almost certainly involve hideous regular expressions, I'm rather shying away from that.

Anyone have any ideas what I'm doing wrong?
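One possible fix, sketched below (my own untested suggestion, not anything from the book): the feared subtractClass/removeClass helper doesn't actually need regular expressions at all - splitting className on spaces and rejoining does the job.

```javascript
// Sketch: remove a single class name from an element without regular
// expressions, by splitting className on spaces, filtering, and rejoining.
function removeClass(element, value) {
  var classes = element.className.split(" ");
  var kept = [];
  for (var i = 0; i < classes.length; i++) {
    if (classes[i] !== value && classes[i] !== "") {
      kept.push(classes[i]);
    }
  }
  element.className = kept.join(" ");
}
```

Then onMouseOut simply becomes removeClass(this, "highlight"), which strips the highlight while leaving any altrow class intact - no need to store and restore the original attribute at all.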

Saturday, September 23, 2006

Playpen #2 - Lightbox JS

The d.Construct Backnetwork has a neat feature which hooks into the Flickr API and pulls out all suitably-tagged images of the conference. Then when you click an image, it appears in a rather sexy overlay window.

I've been looking for some time for an unobtrusive JavaScript method of displaying a photo and caption in a popup, as I have several sites which need this feature without the overhead of dynamic pages or a page per image. Trouble is, most of the methods I've found haven't been friendly if you turn off JavaScript!

The latest issue of .Net magazine (#154) also has a tutorial on Lightboxes (Javascript Image gallery widgets), so I thought I'd give Lightbox JS a try.

It works great straight out of the box, is dead easy to implement, and lets you customise quite a few features. If users have JavaScript disabled, they still get to see the content (the larger image when you click on the thumbnail, albeit in a boring vanilla window), so it's fine from an accessibility standpoint. And I'm pretty sure it's the very same method the Backnetwork uses.

I knocked a quick gallery together, which you can see at the Playpen #2 page.

Tuesday, September 19, 2006

Further Reading

There was a dangerously-tempting bookstall at d.Construct the other week, and I found myself buying two books which have been on my To Be Read list for a while:

Beginning JavaScript with DOM Scripting and AJAX - Christian Heilmann, Apress
I think it will be an excellent companion for the DOM Scripting book I've already read by Jeremy Keith. Will do a proper review when I've read this in more depth.

Blog Design Solutions - Andy Budd et al, Friends of ED
Great advice for customising your blog. Not just in terms of look and feel, but also advice on hosting your own blog, setting up testing environments, databases etc. I hope to give this blog a "lick of paint" in the near future!

I've also recently finished Dan Cederholm's excellent book:
Bulletproof Web Design - Dan Cederholm, New Riders
This one is a must-read for anybody seriously contemplating standards-based web design. Dan takes common table-based solutions (which can still be seen in the wild), explains why they are not bulletproof, and then reworks the solution in a standards-based way. I was very impressed with the session he did for @media in June, and this takes things even further. A great reference for bulletproof techniques.

Tuesday, July 18, 2006

D Is For DOM and d.Construct

Inspired by Christian Heilmann's presentation on DOM Scripting last week (he made JavaScript sound fun, for heaven's sake!), I thought I would try and get my head round the concept. I'm much more familiar with CSS and tend to cower in the corner at the thought of writing any JavaScript. So I thought I would buy a book. Well, I actually went into Borders looking for a newbie's guide to PHP but came out with Jeremy Keith's DOM Scripting: Web Design with JavaScript and the Document Object Model (Friends of ED). How did that happen?

Talking of Jeremy and his friends at Clearleft, I have bought my ticket for the 2006 d.Construct meeting in Brighton on 8th September. It promises to be a good event, and I thought I might make a weekend of it and see a bit of Brighton while I'm at it (or is that just so I can recover from the post-con hangover??).

Saturday, July 15, 2006

WSG London #1

The first London meeting of the Web Standards Group took place last night in North London, and was very well attended, with 190 people turning up to hear Andy Budd on Who Cares About Standards? and Christian Heilmann on Maintainable JavaScript.

Christian was very animated and went quite quickly, and since JavaScript is not really my forte, I found it easier just to listen rather than trying to scribble notes as well. His slides are available via his blog. But I was able to take notes during Andy's talk, a precis of which appears below.

Who Cares About Web Standards?
This was the rather controversial opening salvo from Andy! He began by giving us a brief history of standards - not just web, but standards in general. From one of the earliest in 1120, when King Henry I defined the ell as a unit of measure (the length of his arm!), through the invention of the wooden screw by the Romans, and the subsequent standardisation of screws and other machined parts by Sir Joseph Whitworth in 1841, when the Industrial Revolution was in full flow.

Whitworth was in charge of Babbage's works, where the first mechanical computer, the Difference Engine, was made. By 1860, Whitworth's screws had become the de facto standard, certainly in the UK. Meanwhile in the US, William Sellers proposed a different standard (sounds familiar?!) to help build the railroads. This was all fine until towards the end of the Second World War, when the US was supplying England with a lot of spares for machinery and the war effort - and they were having to make two versions of everything. Eventually the UK capitulated and the US standard became the very first official standard for anything. Now there are over 800,000.

Why bother?
When implemented, standards should:

  • Ease communications and inter-operability. Buy a new DVD player and plug it into your TV and it should work.
  • Make life easier. You can buy a toaster safe in the knowledge that its plug will fit the sockets in your walls.
  • Be a measure of quality, or level of expertise, a mark of professionalism.
  • Ensure safety and durability.
There are different types of "standard" - official, de facto (non-regulated but ubiquitous), open, proprietary. When standards work well, you tend not to think about them.

What's this got to do with the web?
During the Browser Wars, languages such as HTML and CSS were produced and expanded by the browser manufacturers; by pushing their own "standards" they set out to monopolise. Then the W3C came together and put together language recommendations (they are still not standards!), and developers put pressure on the browser manufacturers to support them coherently. All modern browsers support the W3C recommendations - some just do it better than others! The term "web standards" was coined by Jeffrey Zeldman and the Web Standards Project (WaSP).

The Philosophy Behind "Web Standards"
The aim is to separate content from presentation and behaviour, using (X)HTML, CSS and JavaScript in the appropriate fashion to produce quality code and semantically correct documents.

Benefits
  • Communication - easier to hand over to other developers (or come back to yourself in six months' time)
  • Inter-operability - more accessible, forwards-compatible, multiple device support for phones, PDAs, text readers, microformats etc
  • Make life easier - code can be more easily maintained
  • Safety & durability - code less likely to "break" and should last longer
  • Guarantees a level of expertise - proves you are reasonably proficient as a developer and should help eradicate the FrontPage Cowboys ;-)
  • Mark of professionalism - you will stand out from the crowd
Things Aren't Perfect
Standards-compliant pages don't necessarily load faster - the number of packet (file) requests can slow things down, so if you have 2 or 3 CSS files associated with a page, there can be a bigger "up front" hit on speed when a visitor first comes to your site, although subsequent pages may well be quicker to load. There is the benefit of less code bloat without all those <table> and <font> tags, though.

Huge CSS files can be very difficult to maintain, especially when the full consequences of the cascade are taken into account. Presentation is still tied to content (to a much lesser extent) as the CSS/layout you choose is often influenced by the code order of the document itself. It's much better than it was.

Full CSS layout is less than ideal at times. Floats are really a buggy hack, but they're the best we've got. Browser implementations of things like <fieldset> and <legend> are still inconsistent (they handle padding and margin differently). Advances in XHTML and CSS are beginning to stall. When is CSS3 due? XHTML isn't great for marking up applications as opposed to static documents, or for microformats.

Are standards becoming irrelevant?
We've almost reached a tipping point where "everyone" is doing it - so why should we keep going on about it? Development using the standards should be a no-brainer - why do it any other way? Besides, most clients don't care as long as the job gets done, so just do it that way and don't go overboard in advertising the fact. A couple of lines in your proposal documentation, to the effect that "we will use the appropriate web standards", is sufficient.

What now?
The focus needs to change more towards:
  • Accessibility/Usability
  • User Experience
  • Design
  • Branding
  • Client and user goals
Andy's slides can be downloaded from his website.

Geeky Prize Competition!
To tie in with Andy's fixation with screws, here's a little bit of fun...
Despite the US standard for screws taking off after WWII, you can still find a ¼-Whitworth screw/thread in common usage today. I will award a pack of 4 hand-made greeting cards of your choice to the first person who can tell me where.

Oh lord, I've just spotted myself in the audience shots which Christian uploaded to Flickr.

Friday, June 16, 2006

@media, Beyond A Code Audit

Accessibility is about more than just making sure you have passed the code audit points in WCAG, said Robin Christopherson of abilitynet.org.uk

Useful Test Tools/Tips:
Home Page Reader (text only) - cheaper than JAWS.
Natural Language - if you switch language mid-paragraph, use the Inline Language Tag to flag it so that screenreaders know how to correctly interpret the change for pronunciation.
Always define background and foreground colours by default, you can never assume they will be black and white (say).
textaloud.com is a text-to-speech engine.

Problems with FLASH
Dynamic content is difficult to make accessible in Flash. For instance, ALT text should exactly match the words shown in an image or Voice Recognition Software doesn't work properly. Flash breaks the Say What You See rule.
The Flash Accessibility presentation at macromedia.com is totally aural with no text transcript => breaks one rule of accessibility straight away! No tool tips on any of the Flash buttons. Dynamic content requires a screen refresh otherwise screen readers are not aware that content has changed (one of the big problems with some emerging Web2.0 apps). Presentation is low contrast - which would probably fail the Vischeck criteria.
Flash content is never as accessible as HTML (and, by inference, not as accessible to search engines either). Robin (who is blind) did not find it easy to use Flash presentations, as he was never sure whether he was missing something. Also, partially sighted people can't change the font style or colours in a Flash movie.

Problems with Javascript
JS can be a nightmare when it's used to provide the main functionality of a site - it's sometimes not enough just to produce a no-JS version of a page if you're relying on it to accomplish something fundamental. Non-JS versions of a page need to be flagged as available, because Home Page Reader is a plugin to the IE engine, and therefore the user agent could well have JS enabled.
Using a Javascript routine to update the time displayed on a page every second can cause havoc - the screen reader has barely had time to read out the nav links before the page is declared to have been refreshed and it all starts again from the top - you get stuck in an infinite loop and can never get out!
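The sort of script Robin means is trivially easy to write - which is the danger. A sketch of my own (the names are hypothetical):

```javascript
// Sketch: the classic page-clock. Rewriting part of the page every
// second means some screen readers treat it as a refresh, restart
// reading from the top, and the user never escapes the loop.
function startClock(clockElement) {
  return window.setInterval(function () {
    clockElement.innerHTML = new Date().toLocaleTimeString();
  }, 1000);
}
```

A more considerate approach would be a much longer interval, or an explicit control that lets the user trigger the update themselves.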

Problems with Colour / Whitespace / Text Alignment
Never use colour solely to convey meaning - The Tube Map fails the colourblind tests!
People with cognitive disabilities can have big problems if enough white space is not provided - don't crowd things together. Use the <abbr> and <acronym> tags to define abbreviations, plus a decent-sized sans-serif font.
Folks with Dyslexia have trouble reading fully justified text due to the uneven word spacing - left justified text is much better for them.

Assistive Technologies and their special requirements:
Headset mouse emulator - fine control of movement can be limited, so don't place links too close together (like lastminute.com does). Google's Next page links have good spacing.
On-screen keyboard - takes up valuable screen real estate
Suck/Blow tube for left/right mouse click - again, can make movements clumsier than with a mouse alone
Screen Magnifiers - disney.com was a nightmare to view with a magnified screen because of the lack of proper TAB order - the cursor went all over the place, often out of the immediate viewport of the magnified screen. Users get lost.
Screen readers - IE7 is not good with these at present.
No Mouse - tab order is vital - take a look at the Vatican's site to see what a screwed up tab order can do :-(
Also, tab focus can be tricky to distinguish - some sort of obvious highlight works best.
Skip Nav Links - when they receive focus they can be unhidden with CSS, which is quite neat.