Google announced this week that it has decided to shut down its Google+ social network. The announcement also revealed the existence of an API bug that exposed personal information from as many as 500,000 accounts.
According to Google, the flaw gave hundreds of third-party apps access to user information such as name, email address, occupation, gender and age. However, the Internet giant said it had found no evidence of abuse.
Google discovered the bug in March 2018, but waited until now to disclose it, which has raised a lot of questions. The Wall Street Journal reported that Google executives decided not to notify users earlier due to concerns it would attract the attention of regulators and draw comparisons to the Cambridge Analytica data privacy scandal that hit Facebook.
Industry professionals have commented on various aspects of the story, including the vulnerability, legal implications, impact on Google, and how APIs can be secured.
And the feedback begins…
Paul Bischoff, Comparitech:
“In my view, Google is basically pleading ignorance in order to shield itself from legal ramifications. It has conveniently left out some crucial figures in its response that would give us a clearer picture of the scope of this incident. For example, Google says 438 applications had unauthorized access to Google+ profile data, but it doesn’t say how many of its users used those apps. And while Google says it performed a cursory investigation and found nothing suspicious, it also notes that it didn’t actually contact or audit any of the developers of those apps.
As popular and high-profile as Google is, and given that this vulnerability existed for the better part of three years, it is reasonable to assume that the number of occasions on which Google+ data was obtained and misused is non-zero.
Although there’s no federal breach notification law in the US, every state now has its own breach notification law. However, these laws only apply when it’s clear that data was obtained by an unauthorized third party. By turning a blind eye as to whether this occurred and only acknowledging that a vulnerability existed, Google can plead ignorance.”
Ilia Kolochenko, CEO, High-Tech Bridge:
“Unlike the recent Facebook breach, the disclosure timeline here is incomprehensibly long and will likely provoke a lot of questions from regulatory authorities. An inability to assess and quantify the users impacted does not exempt a company from disclosure. Although a security vulnerability per se does not automatically trigger a duty to disclose, in this case it seems that Google had reasonable doubts that the flaw could have been exploited. Further clarification from Google, and technical details of the incident, would certainly help restore confidence and trust among users who are currently left in the dark.
Technically speaking, this is one more colourful example that bug bounties are no silver bullet, even with payouts as high as Google’s. Application security is a multi-layered process that requires continuous improvement and adaptation to new risks and threats. Such vulnerabilities usually require considerable effort to detect, especially when they (re)appear in a system that has already been tested. Continuous, incremental security monitoring is vital to keeping modern web systems secure.”
Matt Chiodi, VP of Cloud Security, RedLock:
“Given Google’s largely stellar reputation, I am shocked that they would purposefully choose to not disclose this incident. We have learned from similar situations that consumers possess a strong ability to forgive when companies take immediate and demonstrable steps to ensure their mistakes are not repeated. Think about J&J with the Tylenol scandal in the 1980s. Because of their swift response, J&J remains one of the most trusted brands. Google could lose a great deal of respect and ultimately revenue if this report is true.”
Bobby S, Red Team, ThinkMarble:
“The fact that Google chose to shut Google+ down on discovering this breach is telling of how serious it is. It appears that a bug in the Google+ API allowed third-party app developers to access the data not just of users who had granted permission, but of their friends. The vast majority of the social media platforms we use every day monetise our data by making it available to third parties via an API, but it is not acceptable for exploitative practices to continue.
This has echoes of the Cambridge Analytica scandal that hit Facebook and has led to much greater scrutiny of Facebook’s policies and openness towards how data is accessed, used and shared. Similarly, Google must seriously consider how it continues to operate alongside third-party developers. This is especially relevant now that the GDPR is in force, affecting any company with users in the EU.
As a data controller, under Article 32 of the GDPR, Google now has greater obligations to ensure that its data processors (including third-party app developers) implement measures that both ensure the security of personal data and obtain the proper permissions from individual users to access it. In the wake of this new regulation, these same companies are also now legally required to take appropriate steps to secure and pseudonymise this data before making it available through their services.”
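The over-sharing mechanism described above – an API that returns a user’s connections’ fields without re-checking each connection’s own consent – can be sketched in a few lines. This is an illustrative toy, not Google’s actual API; all names, fields and functions are hypothetical:

```python
# Hypothetical in-memory "profile store"; the `public` flag stands in
# for each user's own sharing preference.
PROFILES = {
    "alice": {"name": "Alice", "email": "alice@example.com",
              "occupation": "Engineer", "public": False,
              "friends": ["bob"]},
    "bob":   {"name": "Bob", "email": "bob@example.com",
              "occupation": "Designer", "public": True,
              "friends": ["alice"]},
}

def _fields(user):
    """A user's profile fields, without the friends list."""
    return {k: v for k, v in PROFILES[user].items() if k != "friends"}

def get_profile_buggy(user):
    """The flaw: returns full profiles of the user's friends,
    ignoring each friend's own privacy setting."""
    profile = _fields(user)
    profile["friends"] = [_fields(f) for f in PROFILES[user]["friends"]]
    return profile

def get_profile_fixed(user):
    """The fix: expose a friend's fields only if that friend
    has chosen to make their profile public."""
    profile = _fields(user)
    profile["friends"] = [
        _fields(f) if PROFILES[f]["public"] else {"name": PROFILES[f]["name"]}
        for f in PROFILES[user]["friends"]
    ]
    return profile
```

In the buggy version, an app authorised only by Bob still receives Alice’s email address through Bob’s friends list; the fixed version filters each friend’s fields against that friend’s own setting.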
Pravin Kothari, CEO, CipherCloud:
“Google’s unofficial motto has long been ‘don’t be evil.’ Alphabet, the Google parent company, adapted this to ‘do the right thing.’
Google’s failure, if the reports are true, to disclose to users the discovery of a bug that gave outside developers access to private data is a recurring theme. We saw recently that Uber was fined for failing to disclose a breach – instead of disclosing it, they tried to sweep it under the rug.
It’s not surprising that companies that rely on user data have an incentive to avoid disclosing to the public that their data may have been compromised, since disclosure would impact consumer trust. These are exactly the reasons the government should, and will, cite in its inexorable march toward a unified national data-privacy omnibus regulation.
Trust and the cloud do not go together until responsibility is taken for locking down and securing our own data. Even if your cloud provider offers data protection and threat protection, it is not the provider’s data that is compromised and potentially used against them – it is the consumers’.
Enterprises leveraging cloud services need to ensure that additional security measures are in place and that data is protected before it is delivered to a third-party cloud service – that is the only way to be sure the data stays protected.”
Colin Bastable, CEO, Lucy Security:
“Don’t be Evil mutated into Don’t be Caught. Google’s understandable desire to hide their embarrassment from regulators and users is the reason why states and the feds impose disclosure requirements – the knock-on effects of security breaches are immense.
The risk of such a security issue is shared by all of the Google users’ employers, banks, spouses, colleagues, etc. But I guess we can trust them when we are told there was no problem.”
Etienne Greeff, CTO and co-founder, SecureData:
“The news today that Google covered up a significant data breach, affecting up to 500,000 Google+ users, is unfortunately unsurprising. It’s a textbook example of the unintended consequences of regulation – in forcing companies to comply with tough new security rules, businesses hide breaches and hacks out of fear of being the one company caught in the spotlight.
Google didn’t come clean on the compromise, because they were worried about regulatory consequences. While the tech giant went beyond its “legal requirement in determining whether to provide notice,” it appears that regulation like GDPR is not enough of a deterrent for companies to take the safety of customer data seriously. And so this type of event keeps on happening. While Google has since laid out what it intends to do about the breach in support of affected users, this doesn’t negate the fact that the breach – which happened in March – was ultimately covered up.
However, events far closer to home aren’t getting the attention they deserve. We seem to pay more attention to the big tech breaches, while a business such as the supermarket chain Morrisons faces a class action lawsuit for failing to protect deliberately leaked employee data. Last year the High Court ruled that the supermarket was ‘vicariously liable’, because the internal auditor in question was acting in the course of his employment at the company when he leaked that information online. The implications of this type of action are huge – if businesses can be held accountable for the actions of rogue employees acting criminally, then we will have to treat all our employees as malicious threat actors, which is a huge thing to consider and could have momentous repercussions across the globe, in all industries.
Until then, we will undoubtedly see even more of this ‘head-in-the-sand’ practice, especially from larger tech firms now that the GDPR is in force. It ultimately gives hackers another way of monetising compromises – just as we saw in the case of Uber. This is a dangerous practice, and changes need to be made across the technology industry to make it a safer place for all. Currently, business seems to care far more about covering its own back than about the compromise of customer data. It’s a fine line to walk.”
Bryan Becker, application security researcher, WhiteHat Security:
“Even giants can have security flaws. I’m sure the offices of Facebook breathed a collective sigh of relief today, as they’re pushed out of the headlines by a new privacy breach at competitor Google.
Breaches like this illustrate the importance of continuous testing and active threat modeling, as well as the attention that APIs require for secure development and least information/privilege principles. Companies like Google grow large and fast, and can have a problem keeping every exposed endpoint under scrutiny. No one person can possibly be aware of every use or permutation of a single piece of code or API, or microservice.
For organizations that already have a large architecture, knowing where and how to start evaluating security can be a challenge in and of itself. In these cases, organizations can benefit from active threat modeling – basically a mapping of all front-end services to any other services they talk to (both backend and frontend), often drawn as a flow-chart type of diagram. With this mapping, admins can visualize what services are public facing (as in, need to be secured and tested), as well as what is at risk if those services get compromised. In some ways, this is the first step to taking ‘inventory’ in the infosec world.
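The mapping exercise described above can be sketched concretely: represent the architecture as a graph of which services call which, mark the public-facing ones, and compute each one’s “blast radius” (everything reachable if it is compromised). The service names and edges below are purely illustrative, not any real architecture:

```python
from collections import deque

# service -> the services it talks to (both backend and frontend)
SERVICE_MAP = {
    "web-frontend":  ["profile-api", "auth"],
    "profile-api":   ["user-db", "auth"],
    "auth":          ["user-db"],
    "user-db":       [],
    "legacy-export": ["user-db"],  # old, unused-but-not-yet-retired endpoint
}

# The services exposed to the internet: these need securing and testing first.
PUBLIC_FACING = {"web-frontend", "legacy-export"}

def blast_radius(service):
    """Everything reachable from `service` in the call graph --
    i.e. what an attacker could touch if it were compromised."""
    seen, queue = set(), deque([service])
    while queue:
        current = queue.popleft()
        for dep in SERVICE_MAP.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# The first-step "inventory": public entry points and what sits behind them.
for svc in sorted(PUBLIC_FACING):
    print(svc, "->", sorted(blast_radius(svc)))
```

Even this toy model surfaces the point made in the following paragraph: `legacy-export` is forgotten by engineering but still reaches `user-db`, so it stays on the security team’s map.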
Once the landscape is mapped out, automated testing can take a large portion of the strain by continuously scanning the various services – even after they become old. Of course, automated testing is not a be-all/end-all solution, but it does carry the benefit that old or unused-but-not-yet-retired services remain visible to the security team, even after most of the engineering team is no longer paying attention or has moved on to more interesting projects.”
Jessica Ortega, website security analyst, SiteLock:
“Google announced that it will be shutting down its controversial social media network Google+ over the next ten months in the wake of a security flaw. This flaw allowed more than 400 apps using the Google+ API to access the personal information of approximately 500,000 users. The flaw was discovered in March, but Google opted not to disclose this vulnerability as it found no evidence that the information had been misused. Additionally, the decision not to disclose the discovered vulnerability speaks to a fear of reputational damage and possible legal ramifications or litigation in light of recent Senate hearings and GDPR.
This type of behavior may become more common among tech companies aiming to protect their reputation in the wake of legislation and privacy laws – they may choose not to disclose vulnerabilities that they are not legally required to report in order to avoid scrutiny or fines. Ultimately it will be up to users to proactively monitor how their data is used and what applications have access to that data by using strong passwords and carefully reviewing access requests prior to using an app like Google+.”
Rusty Carter, VP of Product Management, Arxan Technologies:
“This shows yet again that ‘free’ is anything but free. The cost of many of these services is your privacy and your data. In this case, the situation is even worse: negligence led to more data being exposed than intended, and – as the Wall Street Journal reported – Google did not notify users about this issue for months, due to fears over disclosure.
While regional legislation may certainly affect how this proceeds, it is clear that consumer awareness of security is increasing quickly, and the long-term success of businesses will depend heavily on their reputation and on consumers’ trust that they are securing and protecting private, personal information.”
Kevin Whelan, CTO, ITC Secure:
“From a security standpoint, this again highlights the risks of how personal data can be accessed by third parties – in this case names, email addresses, ages, occupations and relationship statuses were accessible through an open API.
From a business standpoint, it’s also a blow, as they have had to close the social network, albeit one where the average touch time was five seconds and which was deemed unpopular compared to platforms such as Facebook and Twitter. This bug has been around for a long time, so whilst there’s no evidence that data has been misused, it will require forensic investigation. What’s also surprising here is that Google say they don’t keep logs for more than two weeks, so they aren’t able to see what data has been accessed.”
Brian Vecci, Technical Evangelist, Varonis:
“This is a breach almost everyone can relate to, because everyone has a Google account, and between emails, calendars, documents and other files, lots of people keep a ton of really valuable data in their Google accounts – so unauthorised access could be really damaging. On top of that, when you get access to someone’s primary email (which for many people is Gmail), you’ve got the keys to their online life. Not only do you have their login, which is almost always their email address; you also have the ability to reset any password, since password-reset links are sent via email. A Gmail breach could be the most damaging breach imaginable for the greatest number of people the longer it goes undetected. If Google knew about a potential breach and didn’t report it, that’s a huge red flag.
Unlike many other types of accounts, Google serves for many users as the authentication for other apps like Facebook. Last week, Facebook said they had no evidence that linked apps were accessed. But if these linked apps were accessed due to a breach, it could expose all kinds of personal user data. If you’re using Google or Facebook to login to other apps, there is a whole web of information that could be exposed. Breaches like these are the reason why Google, Facebook and other big tech players need to be regulated – they are a gateway to other applications for business and personal use.”