OhMyGoodness, getting rid of affordances makes it harder on users? Who knew?

As seen on /.: It’s official: Users navigate flat UI designs 22 per cent slower

Have you ever bonked your head on a glass door because you had no clue how to open the door–because the architects decided to make the design “clean” by getting rid of anything that ruined the clean lines of the door?

Yeah, that’s our modern UI environment.

[Photo: a door]

I promise you this is a picture of a door. Do you see how to open it?


I mean, look at the examples provided here.

First, let’s dispense with the stupid items listed as “features of flat design”. They list, amongst the supposed advantages of flat design, “functionality”, “close attention to details” and “clear and strict visual hierarchy”–because before the invention of flat design, none of us wanted to deliver functionality, and most of us slopped our user interfaces together the same way we slop pigs. (*eye roll*)

And let’s look at the supposed “advantages”: “simplicity of shapes and elements”, “minimalism” and “avoiding textures, gradients and complex forms.”

Which suggests to me the problem with the photo of my door above is that it contains a complex shape and an unclear hierarchy which distract from the functionality of the door.

Here. Let me fix that.

[Photo: the same door, reduced to a featureless flat panel]

I know the difference is subtle, but to the purist, it makes the door much better looking. No more distractions from the pure essence of a door, one that has a single unitary shape, a minimalist door free of visual distractions.

Right up until you face-plant yourself because you can’t open the god-damned thing.

I mean, look at the animated example they give:

[Animation: a flat-design weather app]

Setting aside the cute (and distracting) animation of the weather icon to the side, how does the user know that by tapping and dragging he expands and shrinks a region? How does he know that it doesn’t scroll the page instead? Or that tapping (instead of swiping) would expand or shrink an area? Or that tapping instead pulls up an hourly prediction?

How does he know that swiping left and right gives the previous and next day’s weather prediction?

And notice the design is not entirely free from complex shapes. The two icons in the upper right? Is that two separate buttons, or a single toggle (as the shading suggests)?

Or notice the location in the Ukraine. Is the location tappable? Can we pick a new location?

The key here is that the user does not have a fucking clue. And let’s be honest: there is no delight in a “discovery” which seems more designed to make the user feel like a stupid idiot.

I’m not even going to address the complex and superfluous animations which, while cute (and perhaps even demanded in some markets), exist only to say how great the application is, while providing absolutely no aid to user comprehension.


Look, I’m not asking for buttons and checkboxes and the like.

It’s not like you have to beat your users over the head; you can have clean lines and still use affordances which subtly guide the user on how to use your application. Just create a consistent visual language so that, for example, all shapes with a small dot in the corner can be resized by dragging.

But I am suggesting that if the user needs to spend time figuring out how to open the door, they’re less likely to go through the door.

And you lose users. And revenue.

Some thoughts on designing a computer language.

Designing a computer language is an interesting exercise.

Remember first, the target of a computer language is a microprocessor or a microcontroller. And microprocessors or microcontrollers are stupid: they only understand integers, memory addresses (which are just like integers; think of memory as organized as an array of bytes), and if you’re lucky, floating point numbers. (And even there, they’re handled like integers but with a decimal point position. Stored, of course, as an integer.)

Because of that, most modern computer languages rely on a run-time library. Even C, which is as close to writing binary code for microprocessors as most of us will ever get, relies on a run-time library to handle certain abstract constructs the microprocessor can’t. For example, a ‘long’ integer in C is generally assumed to be at least 32-bits wide–but if you’re on a processor that only understands 16-bit integers, any 32-bit operation on a long integer must be handled with a subroutine call into a run-time library. And heck, some microcontrollers don’t even know how to multiply numbers, which means a * b has to translate internally into __multiply(a,b).
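Here’s roughly what such a run-time helper might look like; a minimal shift-and-add sketch of my own. (The function name is mine, purely for illustration; real run-time libraries use reserved names like __multiply and are usually hand-written in assembly.)

#include <stdio.h>
#include <stdint.h>

/* Shift-and-add multiply: the sort of thing a run-time library does when
   the processor has no multiply instruction. Illustrative sketch only. */
static uint16_t run_time_multiply(uint16_t a, uint16_t b) {
    uint16_t result = 0;
    while (b) {
        if (b & 1) result += a;   /* add a once for each set bit in b */
        a <<= 1;                  /* shift a up to the next bit position */
        b >>= 1;
    }
    return result;
}

int main(void) {
    printf("%u\n", (unsigned)run_time_multiply(7, 6));   /* prints 42 */
    return 0;
}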

For most general-purpose programming languages (like C, C#, C++, Java, Objective-C, Swift, Ada and the like), the question becomes “procedural programming” or “object-oriented programming.” That is, which paradigm will you support: procedures (like C) or objects (like Java)?

Further, how will you handle strings? How will you handle text like “Hello world?” Remember: your microprocessor only handles integers–not strings. And under the hood, every string is basically just an array of integers: “Hello world?” is stored in ASCII as the array of numbers [ 72, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100, 63 ], either marked somewhere with a length, or terminated with an end of string marker, 0.
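You can see this directly in C; a little sketch of my own which prints the bytes hiding behind that string:

#include <stdio.h>

int main(void) {
    const char *s = "Hello world?";   /* the bytes 72 101 ... 63, then a 0 */
    for (const char *p = s; *p; ++p) {
        printf("%d ", *p);            /* prints each character's ASCII code */
    }
    printf("\n");
    return 0;
}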

In C, a string is simply an array of bytes. The same in C++, though C++ provides the std::string class which helps manage the array. In Java, all strings are translated internally into an array of bytes which is then immediately wrapped into a java.lang.String object. (It’s why in Java you can write:

"Hello world?".length()

since the construct “Hello world?” is turned into an object.) Objective-C turns the string declaration @”Hello world?” into an NSString, and Swift turns it into its String type, which is bridged to NSString.

Declarations also become interesting. In C, C++ and Objective-C, you have headers, which force your language to provide a mechanism for representing external linkage. Those three languages also provide a mechanism for representing abstract types, meaning that for every variable declaration:

int *a;

which represents the variable a that points to an integer, you must be able to write:

int *

which represents the abstraction of a variable which points to an integer.

And for every function:

int foo(int a, int *b, char c[5]) {...}

You need:

extern int foo(int, int *, char[5]);

But Java does not provide headers, so it has less need for these abstract declarations–but it then adds the need to mark methods as “public”, “protected” or “private” so we know the scope of methods and variables, something which can be hidden in C by simply omitting the declaration from the header.

This means Java’s type declaration system can be far simpler than C’s.

And while we’re at it, what types are you going to allow? Most languages have integer types, floating point types, structure or object types (which basically represent a record containing multiple different internal values), array types, and pointer or reference types. But even here there are differences:

C allows the use of unsigned values, like unsigned integers. Java, however, does not–but really, the only effective differences in performing math operations between signed and unsigned integers are right-shift operations and compare operations. And Java works around the former with the unsigned right shift (‘>>>’) operator.
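Here’s the difference in C terms; a small sketch of my own. (One caveat: right-shifting a negative signed value is technically implementation-defined in C, though on typical two’s-complement compilers it’s an arithmetic shift.)

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t  s = -8;
    uint32_t u = (uint32_t)s;             /* the same 32 bits, reinterpreted */

    /* Right shifts differ: the signed shift copies the sign bit in,
       the unsigned shift brings in a zero. */
    printf("%d\n", (int)(s >> 1));        /* -4 */
    printf("%u\n", (unsigned)(u >> 1));   /* 2147483644 */

    /* Compares differ too: the same bit pattern orders differently. */
    printf("%d\n", s < 0);                /* 1: negative */
    printf("%d\n", u < 1);                /* 0: as unsigned, u is huge */
    return 0;
}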

C also represents arrays as simply a chunk of memory; C is very low level this way. But Java represents arrays as a distinct fundamental type, alongside basic types (like integers or floating point values) and objects.
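The C view is easy to demonstrate (a quick sketch of my own): an array name is little more than the address of its first element, with no length and no bounds checking anywhere.

#include <stdio.h>

int main(void) {
    int a[4] = { 10, 20, 30, 40 };

    /* An array is just memory: a[2] is literally *(a + 2). Nothing stores
       the length, and nothing checks the bounds. */
    printf("%d %d\n", a[2], *(a + 2));   /* 30 30 */
    printf("%zu\n", sizeof a);           /* 16 on a machine with 4-byte ints */
    return 0;
}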

And pointers or references can be explicit or implicit: C++ makes this explicit by requiring you to indicate in a function if an object or structure is passed by value (that is, the entire object is copied onto the stack), or by reference (that is, a pointer is passed on the stack). This makes a difference because updating an object passed by value has no effect on the caller. But when passed by reference, changes to the object can affect the caller’s copy–since there really is only one copy in memory.

Java, on the other hand, passes objects and arrays by reference, always.

This passing by reference makes the ‘const’ keyword (or its equivalent) very important: it can forbid the function being called from modifying the object passed to it by reference.
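In C the same distinction is spelled with pointers; here’s a small sketch (names mine) showing pass by value, pass by pointer, and const forbidding modification:

#include <stdio.h>

struct Point { int x; int y; };

/* Pass by value: the callee gets its own copy; the caller never sees this. */
static void nudge_by_value(struct Point p) { p.x += 1; }

/* Pass by pointer (C's stand-in for pass by reference): this modifies the
   caller's copy, because there is only one Point in memory. */
static void nudge_by_pointer(struct Point *p) { p->x += 1; }

/* const forbids modification: writing p->x += 1 here would not compile. */
static int sum(const struct Point *p) { return p->x + p->y; }

int main(void) {
    struct Point pt = { 1, 2 };
    nudge_by_value(pt);                  /* pt.x is still 1 */
    nudge_by_pointer(&pt);               /* pt.x is now 2 */
    printf("%d %d\n", pt.x, sum(&pt));   /* prints 2 4 */
    return 0;
}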

On the flip side, Java does not have the concept of a ‘pointer’.

And let’s consider for(...) loops. The C language introduces the three-part for construct:

for (initializer; comparator; incrementer) statement

which translates into:

        initializer
loop:   if (!comparator) goto exit;
        statement
        incrementer
        goto loop;
exit:

But Java and Objective-C also introduce different for loop constructs, such as Java’s

for (type variable: container) statement

which iterates the variable across the contents of the container. Internally it is implemented using Java’s Iterator interface, translating the for loop above into:

        Iterator<type> iterator = container.iterator();
loop:   if (!iterator.hasNext()) goto exit;
        type variable = iterator.next();
        statement
        goto loop;
exit:

Of course this assumes container implements the Iterable interface. (Pro-tip: If you want to create a custom class which can be used as the container in a for loop, implement the Iterable interface.)

While we’re at it, if your language is object oriented, do you allow multiple inheritance, like C++, where an object can be the child of two or more parent objects? Or do you implement an “interface” or “protocol” (which specifies methods that are required to be implemented but provides no code), and have single inheritance, where objects can have no more than one parent object but can implement one or more interfaces, such as in Java or Objective-C?

Do you make exceptions a first-class citizen of your language, as in Java or C++? Or are they a library, such as C’s setjmp/longjmp calls? Or are they even available? Early versions of Pascal did not provide for exception handling: instead, you had to either explicitly handle problems yourself, or check ahead of time that things couldn’t go haywire: that you didn’t divide by zero, for example.
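For the curious, here is roughly what the C library approach looks like; a minimal sketch of my own, with setjmp/longjmp standing in for try/catch:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;

static int divide(int a, int b) {
    if (b == 0) longjmp(on_error, 1);   /* the "throw": jump back */
    return a / b;
}

int main(void) {
    if (setjmp(on_error) == 0) {        /* the "try": returns 0 the first time */
        printf("%d\n", divide(10, 0));
    } else {                            /* the "catch": longjmp lands here */
        printf("caught a divide by zero\n");
    }
    return 0;
}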

And we haven’t even dived into more advanced features; we’ve just stuck with the stuff that most general-purpose languages implement. Ada has built-in support for parallel processing, making threads and synchronization part of the language. (Languages like C or Swift require a library–generally based on POSIX Threads–for parallel processing, and the availability of multi-threaded programming in those languages is optional.)
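For comparison, here’s what the library approach looks like; a minimal POSIX Threads sketch of my own. Note that nothing in the C language itself knows a second thread exists:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("hello from the worker thread: %d\n", *(int *)arg);
    return NULL;
}

int main(void) {
    pthread_t thread;
    int id = 1;

    /* The language knows nothing about threads; these are plain library
       calls. (Compile with: cc demo.c -lpthread) */
    pthread_create(&thread, NULL, worker, &id);
    pthread_join(thread, NULL);
    return 0;
}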

Other languages have built-in handling of mathematical vectors and matrices, or of string comparison and regular expressions. Some languages (like Java or LISP) provide support for lambda functions. And other languages combine domain-specific features with general-purpose computing–such as PHP, which allows general-purpose programs to be written, but is designed for web pages.

Pushing further afield, we have languages such as Prolog, a declarative language which defines the formal logic rules of a program without declaring the control flow used to execute those rules.

(Prolog defines the relationships between a collection of rules, and performs a search through the rules in response to a query. Such a language is useful if we wish to, for example, provide a list of conditions that may be symptoms of a disease; a Prolog query would then list the symptoms, and after execution provide a list of diseases which correspond to those symptoms.)

But let’s ignore stuff like this for now, since my interest here is either procedural or object-oriented programming. (One could consider object-oriented programming as basically procedural programming performed on objects.)


The design of a programming language is quite interesting.

And how you answer questions like these (and other questions that may come up) really determines the simplicity of learning versus the expressive power of the language. Sadly, expressive power can become confusing and harm learning: just look at the initial promise of Swift as an easy and painless language to learn. A promise that has since been retracted, since Swift is neither a stable language (Swift 1 does not look like Swift 4) nor a simple one. Things like the type-safety constructs ? (optional) and ! (forced unwrapping) are hard to understand, since they rely on the concept of “pointers” and the safety (or lack thereof) of dealing with null pointers (that is, pointers to memory address 0, which typically means “not initialized” or “undefined”).

Or just look at how confusing the C type system can become to a beginner. I mean, it’s easy for a beginner to understand:

int foo[5];

That’s an array of 5 integers.

But what about:

char *(*(**foo[][8])())[];

What the hell???

Often you find C programmers avoiding the “expressive power” of C by using typedefs instead, declaring each component of the above as an individual type.
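For example, the monstrosity above can be unwound one typedef at a time; the names below are mine, invented purely for illustration:

/* foo is an array of arrays of 8 pointers to pointers to functions
   returning pointers to arrays of pointers to char. Unwound: */
typedef char *String;                   /* pointer to char */
typedef String StringArray[];           /* array of pointers to char */
typedef StringArray *StringArrayPtr;    /* pointer to such an array */
typedef StringArrayPtr Getter();        /* function returning that pointer */
typedef Getter *GetterPtr;              /* pointer to such a function */

GetterPtr *foo[][8];                    /* the same declaration as the one-liner */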

It is in large part C’s “expressive power” (combined with its terse syntax) which allows contests like the International Obfuscated C Code Contest to exist: notice we don’t see an “Obfuscated Java Code Contest”.

Behold, a runner-up in that contest.

But at least it isn’t APL, a language once described to me as a “write-only programming language” because of how hard it is to read, making use of special symbols rarely found on older computers:

(~R∊R∘.×R)/R←1↓⍳R

This is the Wikipedia example of an APL program which finds all prime numbers from 1 to R.

No, I have no clue how it works, or what the squiggly marks mean.

Simplicity, it seems to me, forgoes expressive power. Java, for example, cannot express the idea of an array of pointers to functions returning pointers to arrays–since Java does not have the concept of a pointer to a function (that’s handled by the reflection API), nor does Java have the concept of pointers at all. Further, Java does not permit the declaration of complex anonymous structures: first, everything is a class. And second, classes are either explicitly named or implicitly named as part of an anonymous declaration. It’s hard to declare something like the following C++ declaration; you’re forced to break down each component into its own declaration.

struct Thing {
    struct {
        int x;
        int y;
    } loc;
    struct {
        int w;
        int h;
    } size;
};

And it’s just as well; this makes more sense if you were to write:

struct Point {
    int x;
    int y;
};

struct Size {
    int w;
    int h;
};

struct Thing {
    Point loc;
    Size size;
};

It becomes clear that “Thing” is a rectangle with a location and a size.

But then, people often complain that Java requires a lot more typing to express the same concept.


It’s a balance. It’s what makes all this so fascinating.

Quiet Insanity and YACC.

One of the things I wanted to do involves having a parser generated from a grammar, similar to YACC.

But I need the code generated in Objective-C. And I need a parser that is re-entrant, so it can be run in a separate thread.

Now there are a number of solutions out there. But what I want is an LR(1)- or GLR-based parser built via a state machine, which can be incorporated into Xcode and which generates Objective-C code that can be used on an iPhone or iPad.

And let’s be honest, a lot of advice out there is really fucking stupid. Like this:

Code generation is not the “true way” in dynamic languages like Objective-C. Anything that can be achieved by a parser generator can be achieved at runtime. So, I’d suggest you try something like ParseKit, which will take a BNF-like grammar, and give you various delegate hooks you can implement to construct your parser.

That sound you just heard was my eyes rolling.

The reason, by the way, why you may wish to precompile a grammar rather than compile it at runtime is that generally (a) your grammar won’t change, and (b) the more computational time you can spend evaluating the grammar up front, the more compact the generated parser tables can be.
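To illustrate the principle (this is not an LR parser, just the precompiled-table idea, with names of my own): the table below was worked out ahead of time, so the runtime loop does nothing but index into it. A YACC-style tool does the same thing with far larger tables.

#include <stdio.h>

enum { START, DIGITS, REJECT };

/* A transition table computed ahead of time: this one recognizes a string
   of one or more digits. */
static const int next_state[2][2] = {
    /*            digit    other  */
    /* START  */ { DIGITS, REJECT },
    /* DIGITS */ { DIGITS, REJECT },
};

static int matches_number(const char *s) {
    int state = START;
    for (; *s && state != REJECT; ++s) {
        int cls = (*s >= '0' && *s <= '9') ? 0 : 1;
        state = next_state[state][cls];
    }
    return state == DIGITS;
}

int main(void) {
    printf("%d %d\n", matches_number("12345"), matches_number("12a45"));  /* 1 0 */
    return 0;
}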

So, without any real good solutions out there, I thought: how hard can it be to roll my own?

Well, fairly hard, mostly because the documentation out there for constructing LR(1) grammars sorta sucks.

So I started writing a document that attempts to describe the LR parsing algorithms out there in sufficient detail to roll my own YACC-like parser.


I haven’t quite figured out the GLR parsing techniques in “Efficient Parsing for Natural Language”, so they haven’t been included yet.

But the rest should be there.

And sadly the whole thing grew to 66 pages in length, even without the GLR stuff.

This is a preliminary version, of course. Eventually I plan to upload all this to my GitHub account.

Voyager I and II

The Loyal Engineers Steering NASA’s Voyager Probes Across the Universe


Back in the late 1980s I worked for the Computer Graphics Lab at JPL on the Voyager II/Neptune flyby, alongside Sylvie Rueff and Jeff Goldsmith.

During the week when Voyager II reached Neptune, the Public Information Office at JPL made up several signs, placed around the JPL campus, which showed the week of events: when Voyager II would reach certain locations, and when certain observational data would be downloaded. After the event was over, someone I worked with on the Voyager II team (whose name escapes me now) grabbed the signs and gave one to each of us as a memento of the event.

All these years later, I now have it hanging in my house in Raleigh.

[Photo: the JPL sign from the Neptune encounter]

Each of us who worked on the Voyager II mission also received a limited edition coin in commemoration of the event:

[Photo: the commemorative coin]
The back has a replica of the famous Golden Record that was attached to the Voyager probe.

And I also kept the book published by JPL outlining the Voyager II flyby of Neptune.

[Photo: the JPL book on the Voyager II Neptune flyby]


I’ve worked in several places since, and I’ve done a lot of interesting things in my career.

But this is the event I will remember the most fondly.

My comment to the FAA regarding a request for comments.

After reading on Schneier on Security about a request from the FAA regarding a redesign of the Embraer ERJ 190-300 aircraft, I took a moment to leave a comment with the FAA.

Note that from my reading of the original call for public comments, it seemed to me that what Embraer wants to do is hang all equipment–from in-flight entertainment to some of the aircraft avionics–off a single network which includes off-the-shelf operating systems.

That is, I believe Embraer is not moving towards “security through obscurity” (though this may be an effect of what they are doing), but towards using more off-the-shelf components and towards using network security in place of physical security when designing in-flight electronics. (Current aircraft provide for network security by physically separating in-flight passenger entertainment systems from aircraft avionics.)

With that in mind, I left the following comment with the FAA.

Note that I did this without my usual first cup of coffee, so it’s not very polished.

(Oh, and as a footnote: if you choose to leave a comment with the FAA, be polite. And realize that, unlike most bureaucracies in the Federal Government, the FAA tends to attract people who love to fly or who love to be around airplanes. So if you leave a comment, realize you’re leaving a comment with a bunch of old pilots: very smart folks who are very concerned about aircraft safety, and who are trying to learn about new technologies to make sure the system remains the safest in the world.)

From the background information provided in FAA-2017-0239, it sounds to me like the new design feature of the Embraer Model ERJ 190-300 aircraft is the use of a single physical network architecture which would tie in equipment in the “passenger-entertainment” domain and the “aircraft safety” domain, and provide for separation of these domains through network security. That is, it would provide isolation in these domains through software, and would therefore require network security testing in order to verify the separation of these domains.

When considering computer security, it is useful to classify the potential problems using the “CIA triad”: confidentiality, integrity and availability.

In this context, confidentiality is the set of rules and the systems and mechanisms which limit information access to those who are authorized to view that information. Integrity is the set of measures used to assure the information provided is trustworthy and accurate. And availability is the guarantee that the information can be reliably accessed by those authorized to view the information in a timely fashion.

Taking each of these in turn, we can then consider the types of attacks that may be performed from the passenger-entertainment domain against the aircraft-safety domain, and which may cause inadvertent operation of the aircraft.

Take attacks against availability, for example. Attacks against availability include denial of service attacks: one or more malfunctioning pieces of equipment flood the overall network with so much useless data that it chokes off the flow of information across the network. A similar attack, known as a SYN flood attack, attempts to abuse the three-way handshake of the underlying TCP/IP network protocol to prevent a piece of equipment from responding to network connections.

Embraer needs to demonstrate, if they use a unified network architecture, that the avionics and navigation equipment in their aircraft can survive such a denial of service attack against network availability. This means they need to demonstrate that map products and weather products are not adversely affected, that control services and auto-pilot functionality are not adversely affected, and that attitude, direction and airspeed information is not adversely affected during a denial of service attack. (If the test is to be passed by resorting to back-up indicators, the system must clearly visually indicate to the pilot that avionics are no longer reliable or available.)

Attacks against integrity include spoofing network packets–equipment plugged into the passenger-entertainment domain which spoof on-board data-packets sent from various sensors, and session hijacking: creating properly shaped TCP/IP packets which intercept a network connection between two pieces of equipment. (A sufficiently sophisticated attacker can, with a laptop connected to a network, create a seemingly valid IP packet that appears to come from anywhere in the network.)

Embraer needs to demonstrate that the network protocols they use to communicate between pieces of equipment are hardened against such spoofing. Hardening can be performed through end-to-end encryption and through the use of checksums (above those provided by TCP/IP) which validate the integrity of the encrypted data. Embraer will also need to have processes in place which allow these security encryption protocols to be updated during maintenance in the event a disgruntled employee leaks them.

Embraer also needs to demonstrate their software is hardened against “fuzzing”. Fuzzing is an automated testing technique that involves providing invalid, unexpected or random data to a system and making sure it doesn’t fail (either stop working or provide inaccurate information to the pilots). Fuzzing can be performed both by providing completely random inputs, and by varying valid packets by small amounts to see if it creates problems with the system being tested.

In proving that systems are hardened against such protocol attacks, Embraer needs to demonstrate not only that invalid information does not flow into the aircraft-safety domain, but they must also demonstrate that such attacks do not shut down communications between the aircraft-safety domain and various sensors sending vital data.

Confidentiality attacks include passive attacks: users sniffing all the data on a network and eavesdropping on the data being sent and received, and active attacks: gaining internal access to systems which they are not authorized to access. Confidentiality attacks can lead to attacks against availability (by, for example, locking out someone using a password change) and integrity (by, for example, corrupting IFR map products).

Embraer needs to demonstrate their software is hardened against unauthorized access, both to the flight sensor data being sent across the network (through end-to-end encryption) and by verifying that access to in-flight data products is restricted to authorized users (by using some form of access control, either through the use of a physical key, a password or biometric security: the “three factors” of authentication). For example, one way this requirement could be met for mapping products is to require that map data can only be changed from within the cockpit.

Of course, as always, Embraer needs to demonstrate that back-up attitude, directional and air-speed indicators continue to work in the event of a problem with the avionics, and that the avionics provide a clear indication that the information they are providing may be invalid due to failure or due to a network attack. And Embraer needs to demonstrate the flight controls of the aircraft continue to operate in the event in-flight avionics are compromised. This includes making sure that all flight control systems are kept separate from the in-flight avionics, with the exception of the autopilot (which should not prevent the pilot from taking control with sufficient force).

I understand that this is an incomplete list of potential security attacks. But by classifying attacks in a networked environment along the “CIA triad”, it should be possible to create and maintain a comprehensive list of tests that Embraer can perform to assure the integrity of the aircraft security domain.

Thank you for your time.

– Bill Woody

One of my biggest complaints about the way we approach government is that we constantly complain about the lack of experts in government–but then, those of us who have some expertise never bother to publicly participate in what is supposed to be a democratic process.

So even if my comments are utterly worthless, they are far better than those of the expert in the field who never says anything.

A clever man-in-the-middle attack, which is why you should always send reset instructions by e-mail.

How hackers can steal your 2FA email account by getting you to sign up for another website

tl;dr: When building a web site, NEVER create a reset password flow that asks security questions. Always send an e-mail with reset instructions.

In a paper for IEEE Security, researchers from Cyberpion and Israel’s College of Management Academic Studies describe a “Password Reset Man-in-the-Middle Attack” that leverages a bunch of clever insights into how password resets work to steal your email account (and other kinds of accounts), even when it’s protected by two-factor authentication.

Here’s the basics: the attacker gets you to sign up for an account for their website (maybe it’s a site that gives away free personality tests or whatever). The sign-up process presents a series of prompts for the signup, starting with your email address.

As soon as the attacker has your email address, a process on their server logs into your email provider as you and initiates an “I’ve lost access to my email” password reset process.

From then on, every question in your signup process for the attacker’s service is actually a password reset question from your email provider. For example, if your email provider is known to text your phone with a PIN as part of the process, the attacker prompts you for your phone number, then says, “I’ve just texted you a PIN, please enter it now.” You enter the PIN, and the attacker passes that PIN to your email provider.

Same goes for “security questions” like “What street did you live on when you were a kid?” The email provider asks the attacker these questions, the attacker asks you the questions for the signup process, and then uses your answers to impersonate you to the email provider.

This has some serious consequences with account sign-up and password reset flows that do not involve a secondary channel, such as sending an e-mail or replying to an SMS message.

Note that this attack is insidious because it appears you’re simply answering questions on a third-party web site, while that third party is using your answers to attack a trusted account, such as your e-mail or bank account.

Do you want to know the reason why? CocoaPods.

The Size of iPhone’s Top Apps Has Increased by 1,000% in Four Years

According to Sensor Tower’s analysis of App Intelligence, the total space required by the top 10 most installed U.S. iPhone apps has grown from 164 MB in May 2013 to about 1.8 GB last month, an 11x or approximately 1,000 percent increase in just four years. In the following report, we delve deeper into which apps have grown the most.

Do you want to know why?

Poor software management, and the increasing reliance on libraries which add code bloat.

The former happens when designers and product managers try to fit more and more features into an app, and developers rushed to add those features wind up implementing the same functionality five or six different times.

And the latter–well, no one has ever been fired for using CocoaPods (or another library manager) and sucking in functionality from a third-party library. Or twenty.

On one project I worked on a while back, the project manager didn’t tell me he had someone else working on the project as well–and while I noticed it in the check-ins, I didn’t think anything of it, until one day suddenly two dozen libraries were checked in. I slow-walked my way out of that project starting that day, in part because the other developer replaced a very simple network interface (which worked reliably) with a link to a half-dozen libraries and a rewritten network interface which didn’t work correctly. (For some reason he thought including AFNetworking v2 was better than simply hitting NSURLSession–despite the fact that AFNetworking v2 uses the older NSURLConnection class, and despite the more important fact that he was using AFNetworking wrong.)

So if you’re an iOS developer and you’re wondering why your app is bloated?

Look in a mirror.

Yes, Apple’s tools have contributed to the problem somewhat. And yes, artwork at various resolutions has contributed–though even there I’d argue one huge problem is that so many developers punt on gradients and other special effects by including a huge .PNG file rather than using CAGradientLayer and other API entry points.

But in the end, software bloat comes from poorly built applications constructed by iOS developers who don’t know what they are doing, rushed by product management to deliver garbage.