Sunday, February 25, 2018

Haskell with Reactive Banana and GTK3

I've been doing some GUI coding recently using a combination of Reactive Banana and GTK3. I started out with just GTK3, but I could see it wasn't going to scale because everything GTK3 does is in the IO monad. I found I was having to create IORefs to track the application state, and then pass these around for reading and writing by various event handlers. While the application was small this was manageable, but I could see that it was going to grow into a pile of imperative spaghetti as time went on.

I knew about functional reactive programming (FRP), and went on the hunt for a framework that would work with GTK3. I chose Reactive Banana despite the silly name because it seemed to be targeted at desktop GUI applications rather than games and simulations.

Connecting to GTK3

FRP is based around two key abstractions:

  • Events are instants in time that carry some data. When the user clicks on a button in a GUI you want an event to denote the fact that the button was clicked. If the user moves a slider then you want an event with the new position of the slider.

  • Behaviors carry data that changes over time. In theory this change can be continuous; for instance if you simulate a bouncing ball then the position of the ball is a behavior: at any point in time you can query it, and queries at different times will get different answers. However Reactive Banana only supports behaviors that change in response to events and remain constant the rest of the time. That's fine: my application responds to user events and doesn't need continuous changes. Take the slider event I mentioned above: when the user moves the slider you want to update a sliderPosition behavior with the latest position so that other parts of the program can use the value later on.

In Reactive Banana you can convert an event into a behavior with the "stepper" function. You can also get an event back when a behavior changes.
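
As a minimal sketch (the function name and initial value are my own, not from the library):

```haskell
import Reactive.Banana
import Reactive.Banana.Frameworks

-- Hold the most recent slider position, starting from an initial value.
-- "stepper" turns the Event into a Behavior; "changes" goes the other
-- way, yielding an Event that fires whenever the Behavior updates.
trackSlider :: Double -> Event Double -> MomentIO (Behavior Double)
trackSlider initial positions = do
   sliderPosition <- stepper initial positions
   _updates <- changes sliderPosition   -- :: Event (Future Double)
   return sliderPosition
```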

Behaviors are a lot like the IORefs I was already using, but events are what make the whole thing scalable. In a large application you may want several things to happen when the user clicks something, but without the Event abstraction all of those things have to be directly associated. This harms modularity because it creates dependencies between modules that own callbacks and modules that own widgets, and also causes uncertainty within callbacks about which other callbacks might already have been invoked. With FRP the widget creator can just return an Event without needing to know who receives it, and behaviors are not updated until all events have been executed.

There is already a binding between Reactive Banana and WxHaskell, but nothing for GTK. So my first job was to figure this out. Fortunately it turned out to be very simple. Every widget in GTK3 has three key lists of things in its API:

  • IO functions. These are used to create widgets, and also to get and set various parameters. So for instance the slider widget has functions like this (I'm glossing over some typeclass stuff here. For now just take it that a slider has type Range):

       rangeGetValue :: Range -> IO Double
       rangeSetValue :: Range -> Double -> IO ()

  • Attributes. These are kind of like lenses on the widget, in that they let you both read and write a value. However unlike Haskell lenses this only works in the IO monad. So for instance the slider widget has an attribute:

       rangeValue :: Attr Range Double

    You can access the attributes of a widget using the get and set functions. This is equivalent to using the two IO functions above.

  • Signals. These are hooks where you can attach a callback to a widget using the on function. A callback is an IO monad action which is invoked whenever the signal is triggered. This is usually when the user does something, but can also be when the program does something. For instance the slider widget has a signal

       valueChanged :: Signal Range (IO ())

     The last argument is the type of the callback. In this case it takes no parameters and returns no value, so you can hook into it like this:

       on mySlider valueChanged $ do
          v <- rangeGetValue mySlider
          print v   -- a do-block must end with an action; do something with v here

One subtlety about GTK3 signals is that they are often only triggered when the underlying value actually changes, rather than every time the underlying setter function is called. So if the slider is on 10 and you call "rangeSetValue 9" then the callback is triggered in exactly the same way as when the user moves it. However if you call "rangeSetValue 10" then the callback is not triggered. This lets you cross-connect widgets without creating endless loops.
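
This loop-free cross-connection can be sketched directly in GTK3 (a hypothetical helper of my own; I'm again glossing over the typeclasses and just using Range):

```haskell
import Graphics.UI.Gtk

-- Cross-connect two sliders so that moving either one updates the other.
-- This cannot loop forever: "valueChanged" only fires on a real change,
-- so setting an unchanged value stops the cascade.
linkSliders :: Range -> Range -> IO ()
linkSliders s1 s2 = do
   _ <- on s1 valueChanged $ rangeGetValue s1 >>= rangeSetValue s2
   _ <- on s2 valueChanged $ rangeGetValue s2 >>= rangeSetValue s1
   return ()
```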

Connecting GUI Inputs

The crucial thing is that GTK signals and attributes are isomorphic with Reactive Banana events and behaviors. So the following code gets you quite a long way:

   registerIOSignal :: (MonadIO m) => object
      -> Signal object (m a)
      -> m (a, b)
      -> MomentIO (Event b)
   registerIOSignal obj sig act = do
      (event, runHandlers) <- newEvent
      _ <- liftIO $ obj `on` sig $ do
         (r, v) <- act
         liftIO $ runHandlers v
         return r
      return event

There are a few wrinkles that this has to cope with:

First, a few signal handlers expect the callback to return something other than "()". Hence the "a" type parameter above.

Second, the callback doesn't usually get any arguments, such as the current slider position. It's up to the callback itself to get whatever information it needs. Hence you still need to write some callback code.

Third, some signals work in monads other than just "IO". Usually these are of the form "ReaderT IO" (that is, IO plus some read-only context). The "m" type parameter allows for this.

So now we can get a Reactive Banana event for the slider like this:

   sliderEvent <- registerIOSignal mySlider valueChanged $ do
      v <- rangeGetValue mySlider
      return ((), v)

The two values in the "return" are the return value for the callback (which is just () in this case) and the value we want to send out in the Event.

Some signals do provide parameters directly to the callback, so you need a family of functions like this:

   registerIOSignal1 :: (MonadIO m) => object
      -> Signal object (a -> m b)
      -> (a -> m (b, c))
      -> MomentIO (Event c)
   registerIOSignal2 :: (MonadIO m) => object
      -> Signal object (a -> b -> m c)
      -> (a -> b -> m (c, d))
      -> MomentIO (Event d)

And so on up to registerIOSignal4, which is the longest one I have needed so far.
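
For the record, here is how I'd expect registerIOSignal1 to look; it follows the same pattern as registerIOSignal, just threading the signal's argument through to the supplied action (a sketch, not tested):

```haskell
registerIOSignal1 :: (MonadIO m) => object
   -> Signal object (a -> m b)
   -> (a -> m (b, c))
   -> MomentIO (Event c)
registerIOSignal1 obj sig act = do
   (event, runHandlers) <- newEvent
   _ <- liftIO $ obj `on` sig $ \x -> do
      (r, v) <- act x         -- pass the signal's argument to the action
      liftIO $ runHandlers v  -- fire the Reactive Banana event
      return r                -- return the callback's result to GTK
   return event
```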

Connecting Outputs

Outputs are simpler than inputs. Reactive Banana provides a function for linking an event to an IO action:

   reactimate :: Event (IO ()) -> MomentIO ()

This takes an event carrying IO actions and executes those actions as they arrive. The "MomentIO" return value is the monad used for building up networks of events and behaviors: more of that in "Plumbing" below.

Events are functors, so the usual pattern for using reactimate looks like this:

   reportThis :: Event String -> MomentIO ()
   reportThis ev = do
      let ioEvent = fmap putStrLn ev
      reactimate ioEvent

The argument is an event carrying a string. This is converted into an event carrying IO actions using "fmap", and the result is then passed to reactimate. Obviously this can be reduced to a single line but I've split it out here to make things clearer.

So we can link an event to a GTK attribute like this:

   eventLink :: object -> Attr object a -> Event a -> MomentIO ()
   eventLink obj attr =
      reactimate . fmap (\v -> set obj [attr := v])

Whenever the argument event fires the attribute will be updated with the value carried by the event.
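
For instance, to drive the slider from earlier with an Event Double (a sketch of my own, using eventLink and the rangeValue attribute):

```haskell
-- Move the slider whenever positionEvent fires.
followPosition :: Range -> Event Double -> MomentIO ()
followPosition slider positionEvent =
   eventLink slider rangeValue positionEvent
```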

Behaviors can be linked in the same way. Reactive Banana provides the "changes" function to get an event whenever a behavior might have changed. However this doesn't quite work the way you would expect. The type is:

   changes :: Behavior a -> MomentIO (Event (Future a))

The "Future" type reflects the fact that a behavior only changes after the event processing has finished. This lets you write code that cross-links events and behaviors without creating endless loops, but it means you have to be careful when accessing the current value of a behavior. More about this in "Plumbing" below.

To cope with these "Future" values there is a special version of "reactimate" called "reactimate' " (note the tick mark). You use it like this:

   behaviorLink :: object -> Attr object a -> Behavior a -> MomentIO ()
   behaviorLink obj attr bhv = do
      fe <- changes bhv
      reactimate' $ fmap (fmap (\v -> set obj [attr := v])) fe

This will update the attribute whenever an event occurs which feeds in to the behavior. Note that this will still happen even if the new value is the same as the old; unlike GTK the Reactive Banana library doesn't cancel updates if the new and old values are the same.


The Basic Concepts

Reactive Banana events and behaviors are connected together in the MomentIO monad. This is an instance of MonadFix so you can use recursive do notation, letting you create feedback loops between behaviors and events. MomentIO is also an instance of MonadIO, so you can use liftIO to bring GTK widget actions into it.

To set up a dialog containing a bunch of GTK widgets you do the following things in the MomentIO monad:

  • Use liftIO on GTK functions to set up the widgets and arrange them in layout boxes in the same way you would if you were just using bare GTK.

  • Use registerIOSignal to get Reactive Banana events from the widgets.

  • Use the Reactive Banana combinators to create new events and behaviors reflecting the application logic you want.

  • Use eventLink and behaviorLink to update widget attributes.

For instance you can have a pop-up dialog containing a bunch of input widgets with events attached to their values. Let's say these fields are arguments to the "FooData" constructor, and you also have a function "fooValid :: FooData -> Bool". You can then write your code like this:

   fooDialog :: FooData -> MomentIO (Widget, Event FooData)
   fooDialog (FooData v1 v2) = do

      -- Create GTK widgets w1, w2 and okButton.
      -- Put them all in a top-level "dialog" container.
      -- ... your GTK code here.

      -- Connect events ev1, ev2 to the values of w1 and w2.
      -- ... your calls to registerIOSignal here.
      okClick <- registerIOSignal okButton buttonActivate $ return ((), ())
      bhv1 <- stepper v1 ev1
      bhv2 <- stepper v2 ev2
      let
         fooB = FooData <$> bhv1 <*> bhv2
               -- Behavior is an Applicative, so fooB :: Behavior FooData
         validInput = fooValid <$> fooB
         result = const <$> fooB <@> okClick

      behaviorLink okButton widgetSensitive validInput
      return (dialog, result)

The last line but one links the "validInput" behavior to the OK button sensitivity. So if the input data is not valid then the OK button is greyed out and will not respond to clicks. You can use the same technique to do other more informative things like highlighting the offending widget or displaying a helpful message in another widget.

The "result =" line needs a bit of explanation. The "<@>" combinator in Reactive Banana works like the applicative "<*>" except that its second argument is an event rather than a behavior. The result is an event that combines the current value of the "fooB" behavior with the value from the "okClick" event. In this case the button click carries no information, so we use the function "const" to just take the current behavior value.

One tip: when writing pure functions that are going to be called using the "<@>" combinator it's a good idea to put the argument that will come from the event last.
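
For example, assuming FooData holds two Doubles as above, a hypothetical update function written with the event-derived argument last:

```haskell
data FooData = FooData Double Double   -- assumed shape, for illustration

-- The new value, which will come from an event, goes last...
setFirst :: FooData -> Double -> FooData
setFirst (FooData _ v2) v1 = FooData v1 v2

-- ...so the function partially applies over the behavior and then
-- combines with the event via <@>:
updateFirst :: Behavior FooData -> Event Double -> Event FooData
updateFirst fooB ev = setFirst <$> fooB <@> ev
```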

Keeping it Modular

I have found that these are good rules for designing applications in Reactive Banana:

  • Functions in the MomentIO monad should take events and behaviors as parameters, but only return events.

  • Avoid returning a behavior unless you are sure that this is the only function that needs to change it.

  • Keep the behavior definitions at the top of the call stack where they are used. If in doubt, defer the job of defining the behavior to your caller.

This lets you write independent modules that all have a say in updates to some shared value. The shared value should be represented as a Behavior, and updates to it as Events which either carry new values or (better) update functions. So you have a bunch of independent editor functions and a top level function which looks like this:

   editor1, editor2, editor3 ::
    Behavior FooData -> MomentIO (Event (FooData -> FooData))

   fooDataManager :: FooData -> MomentIO ()
   fooDataManager start = mdo    -- Recursive do notation.
      edit1 <- editor1 fooB
      edit2 <- editor2 fooB
      edit3 <- editor3 fooB
      fooB <- accumB start $ unions [edit1, edit2, edit3]
      -- accumB applies each event's update function to the accumulated behavior value.

Saturday, December 3, 2016

What duties to software developers owe to users?

I was reading this blog post, entitled "The code I’m still ashamed of". 

TL;DR: back in 2000 the poster, Bill Sourour, was employed to write a web questionnaire aimed at teenage girls that purported to advise the user about their need for a particular drug. In reality, unless you said you were allergic to it, the questionnaire always concluded that the user needed the drug. Shortly after, Sourour read about a teenage girl who had possibly committed suicide due to side effects of this drug. He is still troubled by this.

Nothing the poster or his employer did was illegal. It may not even have been unethical, depending on exactly which set of professional ethics you subscribe to. But it seems clear to me that there is something wrong in a program that purports to provide impartial advice while actually trying to trick you into buying medication you don't need. Bill Sourour clearly agrees.

Out in meatspace we have a clearly defined set of rules for this kind of situation. Details vary between countries, but if you consult someone about legal, financial or medical matters then they are generally held to have a "fiduciary duty" to you. The term derives from the Latin for "faithful". If X has a fiduciary duty to Y, then X is bound at all times to act in the best interests of Y. In such a case X is said to be "the fiduciary" while Y is the "beneficiary".

In many cases fiduciary duties arise in clearly defined contexts and have clear bodies of law or other rules associated with them. If you are the director of a company then you have a fiduciary duty to the shareholders, and most jurisdictions have a specific law for that case. But courts can also find fiduciary duties in other circumstances. In English law the general principle is as follows:
"A fiduciary is someone who has undertaken to act for and on behalf of another in a particular matter in circumstances which give rise to a relationship of trust and confidence."
It seems clear to me that this describes precisely the relationship between a software developer and a user. The user is not in a position to create the program they require, so they use one developed by someone else. The program acts as directed by the developer, but on behalf of the user. The user has to trust that the program will do what it promises, and in many cases the program will have access to confidential information which could be disclosed to others against the user's wishes.

These are not theoretical concerns. "Malware" is a very common category of software, defined as:
any software used to disrupt computer or mobile operations, gather sensitive information, gain access to private computer systems, or display unwanted advertising.
Sometimes malware is illicitly introduced by hacking, but in many cases the user is induced to run the malware by promises that it will do something that the user wants. In that case, software that acts against the interests of the user is an abuse of the trust placed in the developer by the user. In particular, the potential for software to "gather sensitive information" and "gain access to private computer systems" clearly shows that the user must have a "relationship of trust and confidence" with the developer, even if they have never met.

One argument against my thesis came up when I posted a question about this to the Law forum on Stack Exchange. The answer I got from Dale M argued that:

Engineers (including software engineers) do not have this [relationship of confidence] and AFAIK a fiduciary duty between an engineer and their client has never been found, even where the work is a one-on-one commission.
I agree that, unlike a software developer, all current examples of a fiduciary duty involve a relationship in which the fiduciary is acting directly. The fiduciary has immediate knowledge of the circumstances of the particular beneficiary, and decides from moment to moment to take actions that may or may not be in the beneficiary's best interest. In contrast a software developer is separated in time from the user, and may have little or no knowledge of the user's situation.

I didn't argue with Dale M because Stack Exchange is for questions and answers, not debates. However I don't think that the distinction drawn by Dale M holds for software. An engineer designing a bridge is not in a position to learn the private information of those who cross the bridge, but a software engineer is often in a position to learn a great deal about the users of their product. It seems to me that this leads inescapably to the conclusion that software engineers do have a relationship of confidence with the user, and that this therefore creates a fiduciary duty.

Of course, as Dale M points out, nobody has ever persuaded a judge that software developers owe a fiduciary duty, and it's likely that in practice it's going to be a hard sell. But to go back to the example at the top, I think that Bill Sourour, or his employer, did owe a fiduciary duty to those people who ran the questionnaire software he wrote, because they disclosed private information in the expectation of getting honest advice, and the fact that they disclosed it to a program instead of a human makes no difference at all.

Addendum: Scope of duty

This section looks at exactly what the scope of the fiduciary duty is. It doesn't fit within the main text of this essay, so I've put it here.

Fortunately there is no need for a change in the law regarding fiduciary duty. The existence of a fiduciary duty is based on the nature of the relationship between principal and agent, although in some countries specific cases such as company directors are covered by more detailed laws.

First it is necessary to determine exactly who the fiduciary is. So far I have talked about "the software developer", but in practice software is rarely written by a single individual. We have to look at the authority that is directing the effort and deciding what functions will be implemented. If the software is produced by a company then treating the company as the fiduciary would seem to be the best approach, although it might be more appropriate to hold a senior manager liable if they have exceeded their authority.

As for the scope, I'm going to consider the scope of the fiduciary duty imposed on company directors and consider whether an analogous duty should apply to a software developer:

  • Duty of care: for directors this is the duty to inform themselves and take due thought before making a decision.  One might argue that a software developer should have a similar duty of care when writing software, but this is already handled through normal negligence. Elevating the application of normal professional skill to a fiduciary duty is not going to make life better for the users. However there is one area where this might be applied: lack of motive to produce secure software is widely recognised as a significant problem, and is also an area where the "confidence" aspect of fiduciary duty overlaps with a duty of care. Therefore developers who negligently fail to consider security aspects of their software should be considered to have failed in their fiduciary duty.
  • Duty of loyalty: for directors this is the duty not to use their position to further their private interests. For a software developer this is straightforward: the developer should not use their privileged access to the user's computer to further their private interests. So downloading information from the user's computer (unless the user explicitly instructs this to happen) should be a breach of fiduciary duty. So would using the processing power or bandwidth owned by the user for the developer's own purposes, for instance by mining bitcoins or sending spam.
  • Duty of good faith: the developer should write code that will advance the user's interests and act in accordance with the user's wishes at all times.
  • Duty of confidentiality: if the developer is entrusted with user information, for example because the software interfaces with cloud storage, then this should be held as confidential and not disclosed for the developer's benefit.
  • Duty of prudence: This does not map onto software development.
  • Duty of disclosure: for a director this means providing all relevant information to the shareholders. For a software developer, it means completely and honestly documenting what the software does, and particularly drawing attention to any features which a user might reasonably consider against their interests. Merely putting some general clauses in the license is not sufficient; anything that could reasonably be considered to be contrary to the user's interests should be prominently indicated in a way that enables the user to prevent it.
One gray area in this is software that is provided in exchange for personal data. Many "free" apps are paid for by advertisers who, in addition to the opportunity to advertise to the user, also pay for data about the users. On one hand, this involves the uploading of personal data that the user may not wish to share, but on the other hand it is done as part of an exchange that the user may be happy with. This comes under the duty of disclosure. The software should inform the user that personal data will be uploaded, and should also provide a detailed log of exactly what has been sent. Thus users can make informed decisions about the value of the information they are sending, and possibly alter their behavior when they know it is being monitored.

Monday, March 14, 2016

Letter to my MP about the Investigatory Powers Bill

I've just sent this email to my MP. Hopefully it will make a difference. I've asked for permission to post her reply.


Dear Ms Fernandes,

I am a resident of [redacted]. My address is [redacted]. I am writing to you a second time about the proposed Investigatory Powers Bill. I wrote to you about this on 5th November 2015 urging you to try to mitigate the worst aspects of this bill, and now I am writing to urge you to vote against this bill when it comes to Parliament.

I am deeply concerned about the powers that this bill would give to the Home Secretary. However in order to keep this email reasonably short I will concentrate on one particularly dangerous power.

If this bill becomes law then the Home Secretary would be able to order any "communications company" (the term could mean anyone involved in providing software or equipment that enables communication) to install any surveillance feature the Home Secretary wishes. The recipient of this order would be unable to appeal against it, and would be prevented from revealing the existence of the order. There is no sunset time on this gag clause: it will last as long as the Home Secretary and the security services wish to maintain it.

It is true that these orders will also have to be signed off by a judge, but that will only verify that the order complies with whatever procedures are in place at the time. Furthermore these judges will only ever hear one point of view on the reasonableness and proportionality of the orders, and this can only result in the erosion of these safeguards over time.

I want to illustrate the danger of this power to weaken security by showing how it would impact a common method of selecting encryption keys called Diffie-Hellman Key Exchange. This method is used by web browsers and email programs whenever they make a secure connection (e.g. to web addresses starting "https"). It is also used by "Virtual Private Networks" (VPNs) which are widely used by businesses to allow employees to work remotely, and I expect that Parliament has one to allow MPs to access their email. You may even be using it to read this.

I want to show that any attempt to intercept messages where Diffie-Hellman is used will greatly weaken it, and that this will worsen our security rather than improving it. I will show this by linking the NSA to the compromise of the Office of Personnel Management (OPM) in America last year.

I don't propose to explain the technical details of Diffie-Hellman. What it means is that two computers can exchange a few messages containing large random numbers, and at the end of this they will share a secret key without that key ever having been sent over the Internet.

Suppose that a communications company provides software that uses Diffie-Hellman, and receives an order from the Home Secretary that they must make the encrypted messages available to law enforcement and the intelligence agencies. What are they to do? They never see the secret keys, so they must do one of the following:

1: Modify the software to send a copy of the chosen key to someone. This is far less secure, and also very obvious. Anyone monitoring the packets sent by the programs will instantly see it.

2: Modify the software to make the keys or the encryption weak in a non-obvious way so that the UK intelligence agencies can determine what the key is. For instance, the random numbers might be made more predictable in a subtle way.

These are the only two ways in which the communications company can comply with the order.

We have seen what happens when Option 2 is chosen, because this was done to Juniper Networks firewall product [see ref 1 below]. Someone deliberately inserted "unauthorised code" which weakened the encryption used by this product in a very specific and deliberate way. There is no possibility that this was an accidental bug. The responsible party is widely believed to be the NSA, because secret briefings released by Edward Snowden made reference to the ability to intercept data sent via this product [ref 2], and it would be much easier for the NSA to infiltrate an American company than for anyone else to do it.

However there is something important that happens when software is updated: hackers (including foreign governments) scrutinize the updates to see what has changed. Normally they find that the old version of the software had a security hole which is now patched, so the patch flags up a way to attack computers that haven't been updated yet. But in this case when Juniper issued an update to their firewall software these hackers found the security hole in the *new* software.

Doing this kind of analysis in a systematic way for many security products is a very large job. Doing it in secret requires the resources of a government. So now not only could the NSA intercept communications sent via Juniper firewalls, but so could an unknown number of foreign governments. The Chinese were almost certainly one of them. Other nations known to have invested in cyber-attack capabilities include Russia, Israel and North Korea (although the last is probably not as capable yet).

Juniper products are widely used by the US Government. This is likely to have been one of the ways in which the Office of Personnel Management (OPM) was penetrated last year [ref 3]. The Chinese government is the prime suspect in this hack, through which the attackers have obtained copies of the security clearance applications of everyone who has ever worked for the US government.

So it seems that the NSA, by introducing a supposedly secret "back door" into a widely used product, cleared the way for the Chinese to obtain secret files on everyone who has ever worked for their government, including all of their legislators and everyone who works at the NSA. Nice job breaking it, Hero!

Now it is true that this is circumstantial; we have no hard evidence that the Juniper back door was inserted by the NSA, no hard evidence that the Chinese found it, and no hard evidence that this contributed to the OPM hack. But each of these is a big possibility. Even if the OPM hack didn't happen in exactly that way, deliberately weakening security makes events like this much more likely. If the Home Secretary orders a company to introduce weakened security, that fact will become apparent to anyone with the resources to dig for it. Once armed with that fact, they can attack through the same hole.

Furthermore, we would never find out when a disaster like the OPM hack happens under the regime described in the Investigatory Powers bill.  Suppose that, thanks to the weakened security ordered by the Home Secretary, secret government files are obtained by a hostile power, and the communications company executives are called before a Parliamentary Inquiry to account for their negligence; how can they defend themselves if they are legally prohibited from revealing their secret orders?

More generally, we will never be allowed to learn about the negative effects of these secret orders. It would embarrass those who issued them, and they are exactly the people who would have to give permission for publication. So if Parliament passes this bill it will never be allowed to learn about the problems it causes, and hence never be able to remedy the mistake.

I have focused on only one of the measures in the Investigatory Powers bill here, but there are many others in the bill that cause me great concern. To go through the whole bill in this level of detail would make this email far longer, and I know that you have many calls on your time. I can only ask you to believe that there are many similar issues. For these reasons I must urge you to vote against the bill when it reaches the House of Commons.

Yours sincerely,

Paul Johnson.




Saturday, March 28, 2015

Google Maps on Android demands I let Google track me

Updated: see below.

I recently upgraded to Android 5.1 on my Nexus 10. One app I often use is Google Maps. This has a "show my location" button:
When I clicked on this I got the following dialog box:

Notice that I have two options: I either agree to let Google track me, or I cancel the request. There is no "just show my location" option.

As a matter of principle, I don't want Google to be tracking me. I'm aware that Google can offer me all sorts of useful services if I just let it know every little detail of my life, but I prefer to do without them. But now it seems that zooming in on my GPS-derived location has been added to the list of features I can't have. There is no technical reason for this; it wasn't previously the case. But Google has decided that as the price for looking at the map of where I am, I now have to tell them where I am all the time.

I'm aware that of course my cellphone company knows roughly where I am and who I talk to, and my ISP knows which websites I visit and can see my email (although unlike GMail I don't think they derive any information about me from the contents), and of course Google knows what I search for. But I can at least keep that information compartmentalised in different companies. I suspect that the power of personal data increases non-linearly with the volume and scope, so having one company know where I am and another company read my email means less loss of privacy than putting both location and email in the same pot.

Hey, Google, stop being evil!

Update: 20th April 2015

A few days ago a new update to the Google Maps app got pushed, and it's now no longer demanding I let Google track me. In fact the offending dialogue box has now been replaced by one with a "No, and stop pestering me" option, so this is an improvement on what they had before.

Way to go, Google!

Saturday, February 22, 2014

A Review of the joint CNN and BBC production: "The War On Terror"

The War on Terror is the latest epic in the long-running World War franchise. The previous serial in the franchise, World War II, was slammed by the critics for its cardboard-cutout villains, unrealistic hero and poor plot-lines, although it actually achieved decent ratings.

The first season of Terror started with a retcon. At the end of World War II it looked like the Soviet Union had been set up as the Evil Empire for yet another World War, but the writers seem to have realised that replaying the same plot a third time wasn't going to wow the audience. So at the start of Terror we get a load of back story exposition in which the Soviet Union has collapsed for no readily apparent reason, leaving America running a benevolent economic hegemony over the allies from the previous series and also its former enemies, Germany and Japan. There was also mention of a very one-sided Gulf War, apparently to emphasize that America's economic power was still matched by its military, even though it didn't seem to have anyone left to fight. Then in the second episode a bunch of religious fanatics from nowhere flew hijacked airliners into important buildings. While the premise may have been a bit thin, the episode managed a level of grandeur and pathos that the franchise hadn't achieved since the Pearl Harbour episode, with the special effects being used to build drama rather than just having huge fireballs. But after this promising start the rest of the season became increasingly implausible, with a buffoonish president launching two pointless wars on countries whose governments turned out to have almost nothing to do with the attack he was trying to avenge. The weak plot and unsympathetic characters make the last few episodes of the season hard to watch.

However in the second season the series grew a beard. The writers replaced the old president with a good looking black guy who clearly wanted to do the right things, finally giving the audience someone to root for, and the focus switched sharply from armed conflict to corrupt politics. Instead of huge set-piece battles featuring ever-more improbable weaponry, the drama now focuses on the political situation within America itself. The battles and weapons are still there of course, but no longer driving the plot. Instead the president is shown as a tragic figure as he tries to stop wars, free prisoners and sort out his country's economic problems, but every time some combination of corporate executive, greedy banker and/or General Ripper will block his reforms, sometimes with an obstructive bureaucrat thrown in for comic relief. He has his hands on the levers of power, but in contrast with his predecessor in World War II those levers don't seem to be connected to anything any more.

Although each episode stands on its own as a story, several plot arcs are becoming clearer as season 2 draws to a close. Events seem to presage the Fall of the Republic, a plot similar to the Star Wars prequel trilogy, but much better done. Whereas Lucas' Old Republic was destroyed by a single corrupt ruler who wanted to become The Emperor, the American Republic in Terror is being destroyed by the very things that made it strong in the previous series: its industrial capacity, financial power and military strength. This is most clearly seen in the episode Drone Strike, where the president was asked to authorise an attack by a remote controlled aircraft against a suspected terrorist convoy on the other side of the world. America is one of the few countries with the technology and money to field these unmanned warplanes, and they have become an important part of American power.  Then we saw the president's face as he was told that the supposed convoy had actually been a wedding party.  At the end of the episode he was reduced to defending his actions at a press conference because the people who had got him into this mess were too powerful to sack.

At the same time there are stories of individual determination and hope set in contrast against the darker backdrop. The recent episode Watching the Watchers showed a soldier and a bureaucrat in different parts of the secret spy agency (or agencies; America seems to have several) independently deciding to rebel against the system they are a part of, by releasing embarrassing secrets to the public. At the same time the episode revealed a hidden factor in previous plot lines. Fans are now reviewing old episodes, even back into the first season, looking for the throwaway lines and improbable coincidences which only now make sense.

The vision of the writers of Terror is now becoming clear; the real war on terror is not the one being fought with guns and robot aircraft, it is the one being fought in the shadows against a loose and ever-shifting coalition of rich, powerful individuals who have discovered that a terrorised population is willing to give them even more money and power, and who therefore want to keep it that way. The president's initiatives aren't being blocked by some grand secret conspiracy; it's just that all of these people know how to work together when they want to stop something happening. But this actually makes them more dangerous; in a conventional conspiracy story the hero just has to find the conspiracy and unmask them, but that isn't going to happen in Terror. In one chilling scene a club of bankers get together for a party to laugh at the rest of the country for continuing to pay them huge amounts after they have wrecked the economy that they were supposed to be running. A journalist sneaks in and tells the story, but it doesn't make any difference, because throwing a party is not a conspiracy.

So Terror goes into its third season in much better shape than it was at the end of the first. The writers have escaped from the constraints of set-piece battles between huge armies, and found instead a solid theme of individual heroism in a believable world of ambiguous morality and complex politics. It all makes for powerful drama and compelling viewing.

Friday, October 11, 2013

TV Resolution Fallacies

Every so often discussion of the ever-higher resolution of TV screens generates articles purporting to prove that you can't see the improvement unless you sit a few feet from the largest available screen. Most of these articles make the same three mistakes:

Fallacy 1: Normal vision is 20/20 vision

The term "20/20 vision" means only that you can see as well as a "normal" person. In practice this means that it is the lower threshold below which vision is considered to be in need of correction; most people can see better than this, with a few achieving 20/10 (that is, twice the resolution of 20/20).

Fallacy 2: Pixel size = Resolution

If a screen has 200 pixels per inch then its resolution, at best, is only 100 lines per inch, because otherwise you cannot distinguish between one thick line and two separate lines. For the more technically minded, this is the spatial version of the Nyquist limit.  Wikipedia has a very technical article, but this picture demonstrates the problem:

The pixel pitch is close to the height of a brick, producing the moiré pattern: in some areas the pixels land on the middle of a brick, and in others on the white mortar.

So the resolution of the screen in the horizontal or vertical directions is half the pixel pitch. But it gets worse as soon as you have some other angle, because those pixels are arranged in a grid. The diagonal neighbours of a pixel are 1.4 times further apart than the horizontal and vertical ones, so the worst-case resolution is the pixel pitch divided by 2 × 1.4 = 2.8. Call it 3 for round numbers.

So the conclusion is that the actual resolution of the picture on your screen is about 1/3 of the pixel pitch.
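The "divide by about 3" rule above is easy to check as a quick calculation. This is just a sketch of the arithmetic in this post; the function names are my own invention:

```haskell
module Main where

-- Worst-case line resolution of a square pixel grid, in lines per inch.
-- Nyquist halves the pixel density, and diagonal neighbours are a further
-- sqrt 2 (about 1.4) apart, giving a divisor of roughly 2.8.
worstCaseResolution :: Double -> Double
worstCaseResolution pixelsPerInch = pixelsPerInch / (2 * sqrt 2)

main :: IO ()
main = print (worstCaseResolution 200)  -- a 200 ppi screen resolves ~70 lines per inch
```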

Fallacy 3: Resolution beyond visual acuity is a waste

The argument here seems to be that if HDTV resolution is better than my eyesight then getting HDTV is a complete waste and I would be better off sticking to my normal standard definition TV.

Clearly this is wrong: as long as my visual resolution outperforms my TV then I will get a better picture by switching to a higher definition format.

So when does HDTV become worth it?

20/20 vision is generally considered to be a resolution of 1 arc-minute. If we use the naive approach embodying all three fallacies, then one pixel on a 40 inch HDTV screen subtends 1 arc-minute at a distance of 62 inches, so some articles on the subject have claimed that you don't get any benefit unless you sit closer than that.

However on that 40 inch screen a standard definition pixel will be roughly twice the size (depending on which standard and what you do about the 4:3 aspect ratio on the 16:9 screen), so it will subtend 1 arc-minute at around 124 inches (just over 10 feet).  So with 20/20 vision you will be able to separate two diagonal lines separated by one pixel at a distance of 30 feet, and with 20/10 vision that goes out to 60 feet. So if you sit less than 30 feet from a 40 inch screen then you will get a visibly better picture with HDTV than standard definition.

And what about Ultra HD?

With 20/20 vision you can just about distinguish two diagonal lines one pixel apart on a 40 inch HDTV screen from 15 feet away, and 30 feet if you have 20/10 vision. So if you sit closer to the screen than that then you will get a better picture with Ultra HD. And of course Ultra HD sets are often bigger than 40 inches. If you have a 60 inch set then the difference is visible up to 23 feet away with 20/20 vision and 46 feet with 20/10.
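The arc-minute arithmetic behind these distance figures can be sketched as below. The function and the screen measurements are my own working, assuming a 40 inch 16:9 screen (about 34.9 inches wide) with 1920 pixels across:

```haskell
module Main where

-- Distance (in the same units as the pixel size) at which a feature of the
-- given size subtends a visual angle of the given number of arc-minutes.
viewingDistance :: Double -> Double -> Double
viewingDistance featureSize arcMinutes =
    featureSize / tan (arcMinutes * pi / (180 * 60))

main :: IO ()
main = do
    let hdPixel = 34.9 / 1920        -- HDTV pixel pitch in inches on a 40" screen
    print (viewingDistance hdPixel 1)        -- naive figure: about 62 inches
    print (viewingDistance (hdPixel * 3) 1)  -- with the factor of 3: about 15.5 feet
```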

So higher resolutions are not just marketing hype.

Final point: Compression artifacts

Digital TV signals are compressed to fit into the available bandwidth. This shows up in compression artifacts; if there is a lot of movement across the image then you may see it become slightly blocky, and if you freeze the image then you can often see a kind of halo of ripples around sharp edges. Higher definition pictures are encoded with more data so that these artifacts are reduced. So even without the increased resolution you may still see an improved picture in a higher resolution format.

Friday, May 24, 2013

Elevator pitch for Haskell short enough for an elevator ride

Greg Hale has written an "elevator pitch" for Haskell. While it is certainly a good piece of advocacy, it is quite long, and therefore not an elevator pitch. The idea of an elevator pitch is something you can deliver in the 30 seconds or so that you find yourself sharing an elevator with a potential investor.

I've been looking for an effective Haskell elevator pitch for some years now, but the only thing I was able to come up with was just that you can deliver software better, faster and cheaper because you need fewer lines of code. This just sounds like hype.

However I think I've now got something better. Here it is:

Conventional languages make the programmer construct both a control flow and a data flow for the program. There is no way to check that they are consistent, and any time they are inconsistent you get a bug. In Haskell the programmer just specifies the data flow: the control flow is up to the compiler. That simplifies the program, cutting down the work and completely preventing a big class of errors.
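A tiny illustration of what "just the data flow" means. The programmer states only the data dependencies; textual order is irrelevant, and the compiler derives a consistent evaluation order (the names here are invented for the example):

```haskell
module Main where

-- 'total' depends on 'scaled', which depends on 'factor' and 'inputs'.
-- The definitions can appear in any order; there is no control flow to
-- get out of step with the data flow.
total :: Int
total = sum scaled
  where
    scaled = map (* factor) inputs   -- uses 'factor', defined below it
    factor = 10
    inputs = [1, 2, 3]

main :: IO ()
main = print total   -- prints 60
```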