10 reasons why you can do semiotics and a machine can’t

Jul 18, 2018 | Semiotics

Photo by Chris Benson on Unsplash

Automating semiotics is all the rage – but semiotic analysis requires much more than mechanically sorting images and assigning abstract meanings to them or their component parts. Relying on automation risks getting analysis very, very wrong – and that’s bad news for brand owners. Here are 10 semiotic puzzles that only humans can solve.

1. Machines can aggregate data and arrange it in different ways – but this is not in itself analysis or interpretation. It is data processing. Sometimes human-produced semiotics suffers from this too, when the practitioner doesn’t quite know what they are doing and has limited analytic resources to work with. Semiotics becomes reduced to a pack-sorting exercise that you could have had focus group respondents do for a fraction of the cost.

Example: this superficial type of analysis will group items of #food packaging that use green or include semiotic signs for nature, and conclude that they mean ‘health’. But consider these problematic examples, where semiotic signs for #health are unsupported or else overridden by other semiotic signs that mean something else. The fact is, an individual semiotic sign almost never carries meaning all by itself. Its meaning almost always depends on what other semiotic signs are in its surroundings – including the signs it is sitting right next to. (A code sketch of what this kind of colour-sorting amounts to follows the example below.)

Both of these are flavour variants. The huge apple on the Tango pack signifies ‘tasty’, not ‘healthy’ (although it may cue ‘contains some apple juice’ as a secondary message).
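To see how thin this kind of ‘analysis’ really is, here is a minimal sketch in Python. Everything in it is hypothetical – the file names, the colour-to-meaning lookup table, the rule for finding the dominant colour – but it does exactly what the superficial approach does: sorts pack shots by colour and attaches an abstract meaning to each group. Data processing, not interpretation.

```python
# A hypothetical pack-sorting exercise: group images by dominant colour channel
# and map the colour straight to an abstract 'meaning'.
from collections import defaultdict
from PIL import Image  # assumes the Pillow library is available

COLOUR_MEANINGS = {"green": "health", "red": "excitement", "blue": "purity"}  # naive lookup

def dominant_channel(path: str) -> str:
    """Return 'red', 'green' or 'blue', whichever channel dominates the image."""
    r, g, b = Image.open(path).convert("RGB").resize((64, 64)).split()
    totals = {"red": sum(r.getdata()), "green": sum(g.getdata()), "blue": sum(b.getdata())}
    return max(totals, key=totals.get)

def sort_packs(paths: list[str]) -> dict[str, list[str]]:
    groups = defaultdict(list)
    for path in paths:
        groups[COLOUR_MEANINGS[dominant_channel(path)]].append(path)
    return dict(groups)

# sort_packs(["tango_apple.png", "kale_smoothie.png"]) would happily file the
# Tango pack under 'health' because of its huge green apple – exactly the
# mistake discussed above.
```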

2. Machines cannot detect irony. This is why you shouldn’t use automation to detect conversational ‘themes’, especially not if you are going to call it semiotics.

Anti-Trump protestors in London, on the occasion of his visit. Deliberate understatement that trades on the reader’s knowledge of #British idiom (and not Trump’s knowledge).

The sign on the left isn’t about biscuits. Automated systems don’t know the word ‘horcrux’ (because it is a very uncommon word) and cannot interpret the meaning of the sentence in which it is found.
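The problem is easy to demonstrate. Suppose, hypothetically, the system works from a fixed vocabulary, as many simple text pipelines do: any word it has never seen collapses into the same ‘unknown’ token, and the joke that hangs on that one word disappears with it. The vocabulary and the sentence below are invented for illustration.

```python
# A hypothetical fixed-vocabulary encoder: out-of-vocabulary words all become
# '<unk>', so the one word carrying the joke is lost before analysis even begins.
VOCAB = {"i", "would", "not", "be", "surprised", "if", "this", "biscuit", "is", "a"}

def encode(sentence: str) -> list[str]:
    return [word if word in VOCAB else "<unk>" for word in sentence.lower().split()]

print(encode("I would not be surprised if this biscuit is a horcrux"))
# ['i', 'would', 'not', 'be', 'surprised', 'if', 'this', 'biscuit', 'is', 'a', '<unk>']
```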

3. They cannot detect or interpret simulation.

Nature. Inhospitable and not particularly attractive.

A simulation of nature (all readers of Baudrillard will get this reference). People – especially golfers – often like simulations of nature a lot more than they like the real thing.

A simulation of a simulation of nature. Grand Theft Auto V (2013) – a tremendously successful video game that sold 85m copies. These three images are not equivalent to each other, even though they all seem to depict ‘nature’ and two seem to depict ‘golf’.

4. Machines don’t read. Automated systems can only self-refer data sets to other data sets; they can’t recognise something new in data by linking it to outside literature or experiences. Their ability to answer the question ‘where have I seen this before?’ is extremely limited. They are only as sophisticated as the person who programmed the machine, on the day they programmed it. If the person hasn’t read ‘Orientalism’ (Said, 1978) or ‘White’ (Dyer, 1997), then the system isn’t going to discover academic research and theory by itself.

Left: a Gaultier #fragrance ad (1993). It is obviously sexual but also has racial undertones. It did not miraculously appear out of the blue or occur in a vacuum; it continues a long European tradition known as Orientalism, in which Europeans entertained unrealistic and racist fantasies about women being kidnapped, stripped and forced to smoke opium in a harem. Middle: 19th-century Orientalist painting by E. Debat-Ponsan (1883). Right: Edward Said’s book.

If the person who programmed an automated system doesn’t know about Orientalism, the system isn’t going to write Orientalism into its own code, because automated systems don’t go to the library.

5. Machines can’t do ideological analysis. They can’t detect power relations such as race or #gender relations. They can’t tell when something is homophobic or transphobic, and they especially can’t keep up with rapidly changing social ideas of what those words even mean (humans find it hard enough; things are changing very quickly).

How many women are on this magazine cover? Hint: possible answers include ‘two’ and ‘zero’. If you’re not sure, don’t guess out loud and risk getting it wrong because 2 and 0 are both very inflammatory answers – and the ‘right’ answer tomorrow might not be the same as it is today.

6. Machines can’t explain the difference in meaning between a realistic photograph, a computer-generated image and a hand-drawn illustration, and they can’t keep up with the reasons why these techniques are more or less persuasive in different categories.

Tropicana: photo-realism (right) is a great way to communicate ‘unspoiled, authentic ingredients’ – except when it isn’t. Tropicana famously rebranded from left to right and then back again when people hated the new packs. There are two problems here that could both have been solved by semiotics. First, the pack on the right is using way too many semiotic signs from a #pharmaceutical code, making the product look like medicine. Second, my own semiotic research into non-alcoholic drinks showed that a whole, unbroken fruit on the front of juice and squash packs communicates … wait for it … wholesome. Note that ‘wholesome’ isn’t the same as ‘exciting’, which is why Tango and other carbonated fruit drinks don’t look like Tropicana.

Left: photorealism used with better (although still not perfect) results. Right: Ella’s Kitchen – sometimes a crude, hand-drawn illustration is best of all (because it is currently a semiotic sign for #organic).

7. They don’t understand self-conscious and reflexive ideas like #retro, and why (for example) 1970s styles, if they can even recognise the 1970s, are retro in one case but simply out of date in another.

Historical food packaging. These simple graphic shapes, abstracted designs and bright but restricted colour palettes are semiotic signs which can be imported into modern products.

Soda Press soft drinks packaging demonstrates how to do retro, using simple graphic techniques. The result is contemporary, self-aware and in line with current trends.

Hungry-Man. A contemporary product, but the only clue is the ‘excellent source of protein’ badge; otherwise this could have come straight from a museum. 1970s food packaging regularly included greasy, slightly obscene close-up photos of food. This is a technique which is difficult to import into modern food packs with any success – the results usually appear out of date rather than fashionably retro.

8. They can only read presence and not absence. They can’t make sense of silences, pauses and empty space.

Gelderlandplein, an upmarket shopping mall in the Netherlands that somewhat resembles an airport. Dutch culture places a very high premium on understatement and restraint. Grey is a colour. Less is more. It’s better not to cover everything in decoration where that can be avoided. In strong contrast, malls in America (and in some other countries such as Malaysia) believe that more is more and are a riot of brightly-coloured visual activity.

What is this? (i) a blue rectangle; (ii) an incredibly important French painting that sold for $9m in 2015; (iii) both. Bonus question: name any software that can detect a difference.

A screenshot from a conversation on LinkedIn in which I relate a conversation I had with a border control guard at Chicago O’Hare as I tried to enter the US. The three-dot ellipsis is a common linguistic token that indicates silence or deliberate non-response – the withholding of a reply. Ellipses are tricky for automated systems, which are programmed to look for words but not for non-words. If we then consider audio recordings of conversations, silences and pauses become impossible for automated systems to interpret. The silent non-response needs a human semiologist to make sense of its meaning and purpose within a conversation. There’s no way for a machine to tell the difference between someone meaningfully and purposefully failing to answer and someone failing to answer because they were simply distracted or because there was a technical fault in the recording.

Another interesting conversational token in this transcript is the set of three digits “111”. Just like “…”, “111” is not a word and has no pronunciation. There is no software currently in existence which can detect and interpret it. It is a semiotic sign that has emerged from the culture of the internet and specifically conveys not just outrage (hence its location among exclamation points), but naive outrage – it implies that the person to whom it is attributed does not fully understand the subject that is making them upset.
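Here is a minimal sketch of why both tokens are invisible. The rule below – keep alphabetic ‘words’, throw everything else away – is the kind of shortcut a simple word-oriented tokenizer takes; the transcript line is invented for illustration.

```python
# A hypothetical word-only tokenizer: anything that is not an alphabetic word
# is discarded, so the ellipsis and the '111' never reach the analysis stage.
import re

def word_tokens(text: str) -> list[str]:
    return re.findall(r"[A-Za-z']+", text)

line = "He said nothing ... nothing at all!!!111"
print(word_tokens(line))
# ['He', 'said', 'nothing', 'nothing', 'at', 'all']
# The silence ('...') and the naive outrage ('111') are both gone.
```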

9. They offer superficial and abstracted analysis that becomes detached from culturally available evidence outside the text. They will tell you that every image of an adult holding a baby means ‘nurturing’ because some images of people holding babies mean that. This is a very dangerous conclusion, as you can see from the images below, in which something very significant is certainly happening but ‘nurturing’ is absolutely the wrong word to describe the event. If we rely on an automated system to organise images into categories with names such as ‘nurturing’, especially if there’s some quantitative element to the reporting, analysis will go off the rails very quickly, because interpreting images like the ones below requires large amounts of background cultural knowledge. These are not just any adults; the children are not their own; the adults have a purpose unrelated to childcare.
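A sketch of how that goes wrong in practice, with a made-up tagging rule standing in for an image-recognition model: once ‘adult plus baby equals nurturing’ is baked in, the counts look rigorous while the misreading stays invisible.

```python
# A hypothetical tagging rule standing in for an image-recognition model,
# plus the kind of quantitative summary that makes its output look rigorous.
from collections import Counter

def naive_tag(objects_in_image: set[str]) -> str:
    return "nurturing" if {"adult", "baby"} <= objects_in_image else "other"

images = [
    {"adult", "baby"},             # a parent holding their own child
    {"adult", "baby", "podium"},   # a public figure posing with someone else's baby
    {"adult", "baby", "crowd"},    # a staged photo opportunity
]
print(Counter(naive_tag(objects) for objects in images))
# Counter({'nurturing': 3}) – three very different events, one misleading number
```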

10. Automated systems regard meaning as located within semiotic signs – but semiotic theory tells us that meaning is located everywhere but the sign: in everything that the sign excludes.

A screenshot from a conference paper I gave several years ago which explains this idea from the perspectives of both semiotics and discourse analysis. Semiotics tells us that the sign ‘war’ gains its meaning from everything which it excludes, commonly summarised as ‘peace’. Discourse analysis tells us that semiotic binaries themselves are not fixed and are highly variable depending on the conversational setting. So, for example, a black vs white binary means one thing to Pantone and quite another to the English Defence League.

What do you think? Did I miss any reasons why you can do semiotics and a machine can’t? Have you designed an automated system that can solve all these puzzles?

All the images included here are for educational purposes only.
