Just under 45% of organizations conduct regular audits and assessments to ensure their cloud environment is secure, which is “concerning” as more applications and workloads move to multi-cloud platforms.
When asked how they monitor risk in their cloud infrastructure, 47.7% of companies pointed to automated security tools, while 46.5% relied on native security offerings from their vendors. Another 44.7% said they performed regular audits and assessments, according to a report from security provider Bitdefender.
Also: AI is changing cybersecurity and companies must become aware of the threat
About 42.1% worked with external experts, according to the study, which surveyed more than 1,200 IT and security professionals, including chief information security officers, in six markets: Singapore, the United Kingdom, France, Germany, Italy, and the US.
It is “definitely concerning” that only 45% of companies perform regular audits of their cloud environments, said Paul Hadjy, vice president of Asia-Pacific and cybersecurity services at Bitdefender, in response to questions from ZDNET.
Hadjy noted that an overreliance on cloud providers’ ability to protect hosted services or data persists, even as enterprises continue to move applications and workloads to multi-cloud environments.
“Most of the time, [cloud providers] are not as responsible as you might think and, in most cases, the data that is stored in the cloud is large and often sensitive,” Hadjy said.
Responsibility for cloud security, including how data is protected at rest or in motion, which identities [of] people, servers, and endpoints are granted access to resources, and compliance, rests predominantly with the client, he said. “It is important to first establish a baseline to determine the current risk and vulnerability in your cloud environments based on factors such as geography, industry, and supply chain partners.”
Among the top security concerns respondents had in managing their company’s cloud environments, 38.7% cited identity and access management, while 38% noted the need to maintain cloud compliance. Another 35.9% cited shadow IT as a concern, and 32% were concerned about human error, according to the study.
However, when it comes to generative AI-related threats, respondents appear to trust their teammates’ ability to identify potential attacks. A majority of 74.1% believed that colleagues in their department would be able to detect a deepfake video or audio attack, with US respondents showing the highest level of confidence at 85.5%.
Also: Code faster with generative AI, but beware the risks of doing so
In comparison, only 48.5% of their counterparts in Singapore were confident that their teammates could detect a deepfake, the lowest among the six markets. In fact, 35% in Singapore said colleagues in their department would not be able to identify a deepfake, which was the highest percentage in the global group who said the same.
So was the confidence of the 74.1% global average who trusted their teammates to detect a deepfake misplaced or well-placed?
Hadjy noted that this confidence was expressed even though 96.6% viewed GenAI as a minor to very significant threat. One explanation is that IT and security professionals don’t necessarily trust the ability of users beyond their own teams (who aren’t in IT or security) to detect deepfakes, he said.
“That is why we believe that technology and processes [implemented] together are the best way to mitigate this risk,” he added.
When asked how effective or accurate existing tools are at detecting AI-generated content such as deepfakes, he said this would depend on several factors. Whether sent via a phishing email or embedded in a text message with a malicious link, deepfakes should be quickly identified using endpoint protection tools, such as XDR (extended detection and response) tools, he explained.
However, he noted that threat actors depend on natural human tendencies to believe what they see and what is endorsed by people they trust, such as celebrities and high-profile personalities, whose images are often manipulated to convey messages.
Also: Three ways to accelerate generative AI deployment and optimization
And as deepfake technologies continue to evolve, he said it would be “almost impossible” to detect such content by sight or sound alone. He stressed the need for technology and processes that can detect deepfakes to also evolve.
Although Singapore respondents were the most skeptical about their teammates’ ability to detect deepfakes, he noted that 48.5% is a significant number.
Hadjy again stressed the importance of having technology and processes in place, saying: “Deepfakes will continue to improve, and detecting them effectively will require ongoing efforts that combine people, technology, and processes working together. In cybersecurity, there is no ‘silver bullet’; it’s always a multi-layered strategy that starts with strong prevention to close the door before a threat enters.”
Training is also increasingly critical as more employees work in hybrid environments and more risks originate from home setups. “Companies should implement clear measures to validate deepfakes and protect against highly targeted phishing campaigns,” he said. “Processes are key for organizations to help ensure double-checking measures are in place, especially in cases where large sums of money transfers are involved.”
According to the Bitdefender study, 36.1% see GenAI technology as a very significant threat when it comes to the manipulation or creation of misleading content, such as deepfakes. Another 45.1% described this as a moderate threat, while 15.4% said it was a minor threat.
Also: Almost 50% of people want an AI clone to do this for them.
A large majority, 94.3%, were confident in their organization’s ability to respond to current security threats, such as ransomware, identity fraud, and zero-day attacks.
However, 57% admitted to having experienced a data breach or leak in the past year, up six percentage points from the previous year, the study revealed. This figure was lowest in Singapore at 33% and highest in the United Kingdom at 73.5%.
Phishing and social engineering were the top concerns at 38.5%, followed by ransomware, insider threats, and software vulnerabilities at 33.5% each.