pop art

C E Macfarlane c.e.macfarlane at macfh.co.uk
Mon Aug 24 06:33:08 PDT 2015


See below for further OT discussion, otherwise please ignore ...

www.macfh.co.uk/CEMH.html

>     FWIW and IMHO The problem is that the BBC pages are generated
>     by people
>     who have no clue about using simple basic HTML and take for
>     granted what
>     browsers, etc., people will use. E.g. for ages their top pages
>     gave me the
>     "mobile" version which was a mess on my screen.
>
>     I can't tell if this was because the browser declaring an odd OS on an
>     ARM-based system was misleading them, or simply a lack of care
>     that some
>     people might be using browsers that didn't handle the current
>     bells and
>     whistles. Or some 'clever' method like sending a 'quick first
>     pass page'
>     then using something like JavaScript to give a better page -
>     which duly
>     wasn't working with a browser that either doesn't incorporate
>     JavaScript
>     or only has a 'lite' version of its methods.
>
>     Their aim, I assume, is to make the BBC site as jazzy and
>     trendy looking as
>     possible and to use all the latest whizz-wheels to impress.
>     Alas, that has
>     it's price.
>
>     So I've largely got out of the habit of trying to read BBC pages using
>     RO/NetSurf. Instead I use Linux/FireFox which is usually
>     fine.

What is needed in web-design is KIS (Keep It Simple), not a desire to show
off that you can use complex techniques that only modern browsers can
handle.  The above problems can be summed up in two points:
	:-(	Lack of KIS, leading to a preference for trendiness of design rather
than simple, basic functionality
	:-(	Lack of proper testing across a variety of platforms/browsers/bandwidths

TBF, the BBC is such a vast organisation that there can be no question of
designing the entire layout of individual pages by hand, as Jim and I can do
on our sites; they have to use some sort of Content Management System (CMS),
and it may be that many problems of the first type originate there.
However, there is no such excuse for the second problem.

A quick and simple measure of the likely viability of a site is how many
validation errors sample pages from it produce, and, again trying to be
fair, these days the BBC's count is well below that of many, perhaps most,
other equivalently-sized organisations.  Nevertheless some errors do come to
light.

BBC Examples:

https://validator.w3.org/nu/?doc=http%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fa-z
5 Warnings - warnings are not usually a problem, so Programmes A-Z is a
'clean' page

https://validator.w3.org/nu/?doc=http%3A%2F%2Fwww.bbc.co.uk%2Fevents%2Fr89mxj%2Fby%2Fdate%2F2015%2F08%2F24
5 Warnings, 4 Errors.

https://validator.w3.org/nu/?doc=http%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb01ckmgg
6 Warnings, 4 Errors.

https://validator.w3.org/nu/?doc=http%3A%2F%2Fwww.bbc.co.uk%2Fiplayer
17 Warnings  -  iPlayer 'Home' page is clean

http://www.bbc.co.uk/iplayer/episode/p02v03p1/big-blue-live-episode-1
17 Warnings, 1 Error.

By contrast:

https://validator.w3.org/nu/?doc=http%3A%2F%2Fwww.microsoft.com%2Fen-us%2F
733 Warnings and errors  -  shame on Microsoft :-(
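Counts like those above can also be gathered programmatically, rather than
by pasting URLs into the web form: the Nu validator can return its report as
JSON (the out=json parameter).  A minimal sketch, assuming the public
validator.w3.org/nu/ instance is reachable and still serves that format
(errors arrive as type "error", warnings as type "info" with subType
"warning"):

```python
import json
import urllib.parse
import urllib.request
from collections import Counter

# Public Nu validator instance; assumed reachable from where this runs.
NU_VALIDATOR = "https://validator.w3.org/nu/"

def tally(messages):
    """Count validator messages by severity.

    In the Nu validator's JSON output, errors have type "error" and
    warnings are type "info" with subType "warning".
    """
    counts = Counter()
    for m in messages:
        if m.get("type") == "error":
            counts["errors"] += 1
        elif m.get("subType") == "warning":
            counts["warnings"] += 1
    return counts

def validate(url):
    """Ask the Nu validator to check `url`; return severity counts."""
    query = urllib.parse.urlencode({"doc": url, "out": "json"})
    with urllib.request.urlopen(NU_VALIDATOR + "?" + query) as resp:
        report = json.load(resp)
    return tally(report.get("messages", []))
```

Run over a list of sample pages, something like this would give a rough
league table of the kind compared above in one pass.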

But of course validation is only a guide, albeit a goodish one, to what
really matters, which is how the site behaves in people's browsers, and,
besides the problems that Jim has highlighted, there are consistent reports
of accessibility and page-loading speed problems with the BBC site.

>     Pardon the further digression but it gives me a chance to comment on
>     something related which others may have encountered...
>
>     A while ago I was collating info on the 'history' of the BBC
>     iplayer. This
>     meant I went back and trawled though the old BBC 'web logs',
>     etc, which the
>     developers, etc, posted to. For this I *did* use RO/NetSurf
>     as it lets me
>     export the results in a form convenient for my purposes. I
>     found many pages
>     which were hidden because links had broken, etc.

Certainly there are tools that can search for broken links, and maybe ones
that can search for hidden pages.  There is an example of the former here:
http://htmlhelp.com/links/validators.htm
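The basic idea behind such link checkers is simple enough to sketch: fetch a
page, collect the href of every anchor, and try each one.  A rough,
illustrative version (Python standard library only; a real tool would also
follow redirects sensibly, rate-limit itself, and respect robots.txt):

```python
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Gather the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def broken_links(page_url):
    """Return the links on `page_url` that fail to load (rough check)."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    bad = []
    for href in collector.links:
        absolute = urljoin(page_url, href)  # resolve relative links
        if not absolute.startswith(("http://", "https://")):
            continue  # skip mailto:, javascript:, fragments, etc.
        try:
            urllib.request.urlopen(absolute, timeout=10).close()
        except (urllib.error.URLError, OSError):
            bad.append(absolute)
    return bad
```

Finding *hidden* pages is harder, since by definition nothing links to them;
that is more a job for crawling old sitemaps or archives.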

>	Others where the page
>     'code' was also broken in ways that some browsers hide - I
>     guess because
>     the generating software has been used often enough that the
>     larger browser
>     teams were able to deal with it.

That shouldn't happen if, as they ought to be, the results are run through a
validator.

>     Similarly I found some time ago that there are at least *two*
>     versions of
>     the iplayer's schedules day-page listings. One of which only
>     goes back a
>     week, while the other covers the full month.

I would have thought that both would be useful.

>     It all seems weird until you realise that the BBC pages are written by
>     different people/teams at different times, and they may not
>     bother to talk
>     to each other or use the same tools/methods. And then some of
>     the results
>     may become embedded in use.

Obviously I can't pontificate on what happens within the BBC website team,
though the number of changes of direction over time would seem to hint at a
lack of strategic coherence, possibly caused or exacerbated by changes in
management or other staff.  Certainly there are problems in managing any
large process, especially if you also have a high churn of staff,
but good, stable management should at least be able to get the basics right
and steer the ship on a steady, consistent course over time.

>     'black text on black backgrounds'. Cool eh? 8-]

Again, this is lack of testing.

>     Anyone who looks at my own webpages will realise how dumb and
>     simple the
>     HTML is. I know it can look childish and make me look
>     out-of-touch, but I
>     prefer that approach. OK, I *am* a fossil. I remember the
>     days when one
>     of the big fundamentals of HTML was that the 'mark up' was to allow
>     the *browser* to decide how to render from 'logical' markup
>     code. So each
>     user saw a layout that suited *them* as an individual.

That *should* still be the case.  In principle, for most pages, a browser on
a laptop or PC should be able to use the same HTML page code as that on a
mobile.  The site designer shouldn't be trying to second-guess the
characteristics of the device and alter content accordingly, rather they
should be trying to keep their code simple enough to work on any device.
That's easier said than done of course  -  even my own site does some,
though very little, browser-sniffing  -  but the aim should always be to
keep such non-standard techniques to an absolute minimum, and never to use
them where functionality is critical.

>     Not the 'modern' <sic>  the-layout-you-see-is-dictated-by someone-else
>     which people now often take for granted.

Again, adequate testing would show up the weaknesses in the
layout-control-freak's approach.

>     I recall the days when someone I know described the use of
>     nested tables to
>     force layout as "gratuitous cruelty to browsers". ;-> Alas,
>     things have
>     got worse from my apparently-luddite POV.

I confess I have a minority of pages that still do this  -  they contain
such things as mathematics, and nested tables were the only way I could get
the derivations and proofs laid out as I was used to seeing them on paper.
Needless to say, whereas most of my site 'just works' on a mobile, pages
like those, where I have tried to 'force' the layout, are the ones that look
worst on a mobile!

>     Rant over. Apologies to anyone irritated by the irrelevance
>     or views. :-)

Ditto.




More information about the get_iplayer mailing list