This autumn Edelman hosted a cyber security predictions event for Symantec (client), where specialists and industry leaders gazed into the future to discuss the disruptive technology trends that will transform the business and security landscape. Drawing inspiration from the speakers’ forecasts, I looked back at the past 12 months to see how the technological phenomena they highlighted have already started to unfold.

FAKE NEWS CAUGHT US OFF GUARD

“If we look at the last year or so, we’ve had insights into the impact of fake news. But what has changed and what has caught us surprisingly off guard, is the way we consume news and also how we share those stories. The way we trust our personal social network is really fascinating,” said Dr Jessica Barker, sociologist and Co-Founder of security consultancy Redacted.

Indeed, in 2017 we were unprepared for the impact fake news could have on events of critical significance for global public affairs. By fuelling scepticism around online stories and promoting misinformation, fake news also eroded consumers’ trust in mainstream media.

Although the introduction of regulation and increased measures for tackling fake news on social media will give us more tools for challenging its impact in the future, we need to act now. We have a responsibility to raise awareness of the fake news phenomenon and to educate others to be mindful of how sharing and tolerating fake news can have serious implications.

WE DIDN’T SEE WANNACRY COMING

Ian Levy, Technical Director at the National Cyber Security Centre, noted: “The trajectory I see around how cyber security is talked about, about how people think they cannot defend themselves, is actually really dangerous.”

Nonetheless, earlier this year the WannaCry ransomware attack left businesses and public organisations around the world paralysed in a successful campaign of mass disruption. While the lessons from the incident will certainly help prepare us for the future, the impact of the attack was unprecedented.

WannaCry, however, is a case study that reminds us that instead of brooding over past mistakes, we need to learn from our experiences to ensure we are ready for the next challenge – be that in the online or physical world. The aftermath of such events is the perfect opportunity for organisations of all sizes to build resilience and a plan of action based on real-life lessons.

EVERYDAY IT USERS WEREN’T ENGAGED ENOUGH

In an earlier post this year I talked about the shift of blame around the WannaCry attack. Assigning accountability in such incidents is not uncommon, with individuals often being considered the “weakest link” in cyber security. Ian Levy and Darren Thomson, CTO & Vice President, Technology Services at Symantec, however, argued that’s not necessarily the case.

Be that as it may, people will still be a key part of the tech-led approach to beating cybercrime. As Thomson added: “Technology alone will not be enough for us to defeat the attacks that we have already started to see across the planet.”

“Non-trained, non-expert users of IT and that means essentially everybody, both in their private lives and their business, are the secret sauce when it comes to the solution we’re going to need to develop”, said Thomson.

Perhaps as a New Year’s resolution we should try to remember that humans are, after all, the key to resolving many challenges. And it’s not just about educating and training people to follow certain safety procedures and business execution plans. As Darren Thomson advised, “we’ve got to change that engagement model and that has got as much to do with psychology as it has to do with the technical resources we’re putting into this”.

WE FEARED ARTIFICIAL INTELLIGENCE INSTEAD OF HACKERS

“Real criminals aren’t out to prove how clever they are, they’re out to make money,” suggested Peter Wood, Chief Executive Officer at First Base Technologies. And since the past year saw Artificial Intelligence becoming a reality quicker than anticipated, odds are cyber criminals will soon develop an appetite for exploiting this new technology even before it has fully matured.

“Businesses are already going beyond what automation originally was, and maybe heading to AI without knowing how to switch it off”, said Graeme Hackland, CIO at Williams F1 Team.

This could have grave consequences, particularly in situations where AI is applied to make a data-driven decision on behalf of a human, because, as Graeme Hackland noted, “We’re going into the whole AI thing believing the AI will be right more often than we are.”

Given how worried many have been about advancements in AI in recent months, as a society we have asked the wrong questions, focusing on how AI will impact computing and work rather than on the important question Hackland raised: “But what if somebody got to that AI and changed the data?” It’s a question the industry must be able to answer before it goes all in on AI, because AI should always be the solution, not the problem.