Cloud
Trellix Announces Expanded Support for Amazon Security Lake From AWS

Trellix has announced expanded support for Amazon Security Lake from Amazon Web Services (AWS), a service that automatically centralizes security data from cloud, on-premises, and custom sources into a purpose-built data lake. The offering is designed to enable simpler and faster delivery of Trellix XDR solutions, along with increased protection of workloads, applications, and data for AWS customers.
Trellix’s expanded support for Amazon Security Lake allows AWS customers to integrate their security data lake with the Trellix XDR security operations platform using the Open Cybersecurity Schema Framework (OCSF) open standard. Amazon Security Lake automatically centralizes an organization’s security data from across its AWS environments, leading SaaS providers, and on-premises and cloud sources into a purpose-built data lake, so customers can act on security data faster and simplify security data management across hybrid and multicloud environments. In addition, the OCSF schema enables Trellix customers to combine hundreds of data sources with Amazon Security Lake data. As a result, AWS and Trellix customers can seamlessly apply Trellix machine learning (ML), threat intelligence, and predictive analytics to gain the insights needed for deeper detection and faster threat mitigation.
“The amount of data available to any enterprise today is staggering,” said Britt Norwood, Senior Vice President, Global Channels & Commercial at Trellix. “Without a way to centralize the management and storage of that data, it’s difficult for customers to glean the insights needed to keep data safe. Our integration with Amazon Security Lake provides customers with more centralized visibility and quick resolution of their security issues.”
“With security at the forefront, we are relentlessly focused on innovating to deliver new ways to help customers secure their entire enterprise,” said Rod Wallace, General Manager for Amazon Security Lake at AWS. “Customers who leverage Amazon Security Lake and Trellix can collect a wide spectrum of security logs and findings from AWS, Trellix, and third-party sources in Amazon Security Lake and send them to Trellix for advanced analytics and incident response.”
- Trellix for Amazon Security Lake: Through newly combined capabilities, customers can share security events across Trellix XDR and their Amazon Security Lake, gaining complete detection and response capabilities for their AWS environments. By consolidating their security alerts into Amazon Security Lake using OCSF, security teams can spend time protecting environments instead of performing the undifferentiated heavy lifting of managing their security data.
- Trellix and OCSF: Trellix is proud to be a contributing member of the open-source OCSF community, which has built a framework promoting interoperability and data normalization between security products. Joining OCSF supports collaboration with other industry organizations, further benefiting customers and the broader cybersecurity community.
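To make the normalization idea concrete, below is a minimal, hypothetical Python sketch of mapping a vendor-specific alert onto an OCSF-style structure before it lands in a security data lake. The field names are loosely modeled on the OCSF schema (schema.ocsf.io) and the vendor alert is invented for illustration; consult the OCSF specification for the authoritative class and attribute definitions.

```python
# Hypothetical sketch: normalizing a vendor-specific alert into an
# OCSF-style event. Field names are loosely modeled on the OCSF schema
# (schema.ocsf.io); see the spec for authoritative definitions.
import json
import time

def to_ocsf_like_event(vendor_alert: dict) -> dict:
    """Map a raw vendor alert onto a flat, OCSF-flavored structure."""
    return {
        "class_uid": 2001,  # e.g. a Security Finding-style class
        "time": vendor_alert.get("detected_at", int(time.time() * 1000)),
        "severity_id": {"low": 2, "medium": 3, "high": 4}.get(
            vendor_alert.get("severity", "low"), 1),
        "message": vendor_alert.get("description", ""),
        "metadata": {
            "product": {"vendor_name": vendor_alert.get("vendor", "unknown")}
        },
    }

# Invented example alert, purely for illustration
raw = {"vendor": "ExampleEDR", "severity": "high",
       "description": "Suspicious process spawned by office app"}
print(json.dumps(to_ocsf_like_event(raw), indent=2))
```

Once every source emits the same shape, downstream analytics can query one schema instead of reconciling hundreds of vendor formats.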
Cloud
Google Clarifies the Cause of Missing Google Drive Files

Many Google Drive users recently experienced the unsettling disappearance of their files, prompting concerns. Google has now identified the root cause, attributing the issue specifically to the Google Drive for Desktop app. While assuring that only a limited subset of users is affected, the tech giant is actively investigating the matter and promises timely updates.
To prevent inadvertent file deletion, Google provides the following recommendations:
- Avoid clicking “Disconnect account” within Drive for desktop.
- Refrain from deleting or moving the app data folder, located at:
- Windows: %USERPROFILE%\AppData\Local\Google\DriveFS
- macOS: ~/Library/Application Support/Google/DriveFS
- Optionally, create a copy of the app data folder if there is sufficient space on your hard drive (a minimal backup sketch follows this list).
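For users comfortable with a little scripting, the following minimal Python sketch copies the app data folder to a backup location, following the paths Google lists above. The backup destination is an arbitrary example, and Drive for desktop should be quit first so no files are locked.

```python
# Minimal sketch: back up the Drive for Desktop app data folder.
# Quit Drive for desktop before copying so files aren't locked.
import os
import platform
import shutil

def drivefs_path() -> str:
    """Return the platform-specific DriveFS app data path."""
    if platform.system() == "Windows":
        return os.path.expandvars(r"%USERPROFILE%\AppData\Local\Google\DriveFS")
    return os.path.expanduser("~/Library/Application Support/Google/DriveFS")

src = drivefs_path()
dst = os.path.expanduser("~/DriveFS_backup")  # example destination

if os.path.isdir(src):
    shutil.copytree(src, dst, dirs_exist_ok=True)  # requires Python 3.8+
    print(f"Copied {src} -> {dst}")
else:
    print(f"App data folder not found at {src}")
```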
Before Google officially addressed the issue, distressed users took to the company’s support forum to report deleted files. One user from South Korea highlighted a particularly severe case where their account reverted to May 2023, resulting in the loss of anything uploaded or created after that date. Additionally, the user emphasised that they had not synced or shared their files or drive with anyone else.
As Google delves deeper into resolving this matter, affected users are advised to heed the provided precautions. The company’s commitment to ongoing updates reflects its dedication to swiftly addressing and rectifying the situation. The incident serves as a reminder of the importance of proactive measures to safeguard digital data, especially as users navigate cloud-based platforms such as Google Drive.
Cloud
Addressing Blind Spots in the Hybrid Cloud

Written by Mark Jow, EMEA Technical Evangelist, Gigamon
With the rapid growth of the hybrid cloud market, businesses are experiencing numerous benefits. According to a study by Amazon Web Services, cloud computing is projected to add almost $181 billion to the UAE’s economy by 2033. Further reports reveal that in 2021, cloud adoption contributed an astounding 2.26% to the UAE’s GDP, an economic value of $9.5 billion.
However, security has emerged as a significant challenge. In a recent survey conducted by Gigamon, we found that 90 per cent of IT and Security leaders across EMEA, APAC and the US have experienced a data breach in the last 18 months. We also uncovered that over 70 per cent of IT security leaders admit they allow encrypted data to flow freely across their IT infrastructure. It seems, therefore, that there is an industry-wide lack of awareness of blind spots, and of the complexity and risks of maintaining security in hybrid cloud environments.
How to identify blind spots
Going back to the basics, blind spots are areas within a hybrid cloud infrastructure that are not adequately reached by traditional security and monitoring tools. These areas remain hidden from view, hindering effective data collection and analysis and therefore compromising security.
The good news is that IT and Security professionals are increasingly aware of the importance of avoiding blind spots: our research found that the exploitation of unexpected blind spots is a major concern for CISOs. To address this concern, CISOs and their teams are embracing deep observability to gain complete visibility across their entire infrastructure. Deep observability harnesses immutable, precise and actionable network-derived intelligence to amplify the power of existing tools, eliminating blind spots both on-premises and in the cloud and providing greater visibility into an organisation’s security posture and potential threats.
Encrypted traffic and limited visibility
Yet there is still work to be done. Blind spots, and what they consist of, are hugely underestimated: only 30 per cent of organisations have visibility into encrypted traffic. Moreover, 35 per cent of respondents reported limited visibility into containers, and less than half (48 per cent) had visibility of east-west traffic, the lateral movement of data within the hybrid cloud infrastructure. These limitations further contribute to unobserved segments of the hybrid cloud.
The impact of unrecognised blind spots
As a result, nearly one-third of breaches go undetected by IT and Security professionals and their tools, according to the latest survey of 1,000 IT professionals across EMEA, the US, Australia and Singapore. The failure to recognise blind spots significantly hampers the ability to protect sensitive data and respond to security incidents effectively. While surface-level confidence appears high, with 94 per cent of global respondents to our survey believing their security tools provide complete visibility, it’s clear this perception is simply not the reality of hybrid cloud security.
The hybrid cloud is inherently complex, and traditional security and monitoring tools are often insufficient in addressing blind spots in this area. To effectively eliminate blind spots and narrow the perception vs. reality gap in hybrid cloud security, CISOs and their teams must actively prioritise deep observability. By leveraging actionable network-derived intelligence, businesses can amplify the power of existing security and observability tools and gain comprehensive visibility of their complete hybrid cloud estate.
Implementing deep observability will significantly accelerate progress in improving visibility into containers, east-west traffic and encrypted data to bolster security and totally eradicate the blind spots that are keeping today’s CISOs up at night.
Cloud
Five Ways to Maximise the Security, Performance and Reliability of Your Online Business

Written by Bashar Bashaireh, Managing Director, Middle East & Turkey, Cloudflare
With a shift to digital transformation, enterprises face new challenges and opportunities for growth — from anticipating and meeting customers’ digital needs to mounting a strong defence against web-based attacks, overcoming latency issues, preventing site outages, and maintaining network connectivity and performance. When optimizing the online customer experience, enterprises need to adopt a strategy that integrates robust site security, performance, and reliability. Although this strategy involves many components, here are five key considerations that can help businesses meet customer needs and provide a secure and seamless user experience:
Leverage DNS and DNSSEC support to maximize availability and uptime
Frequently referred to as the ‘phone book of the Internet,’ DNS (domain name system) translates domain names into numeric IP addresses and enables browsers to load Internet resources. As DNS attacks become more prevalent, businesses are starting to realize that a lack of resilient DNS creates a weak link in their overall security strategy.
There are multiple approaches companies can take to deploy a resilient DNS strategy. They can use a managed DNS provider that hosts all DNS records, offers query resolution at multiple nodes globally, and provides integrated DNSSEC support. DNSSEC adds a layer of security to DNS by attaching cryptographic signatures to existing DNS records.
Companies can also build additional redundancy by deploying a multi-DNS strategy — even if the primary DNS goes down, secondary DNS helps keep the applications online. Large enterprises that prefer to maintain their own DNS infrastructure can implement a DNS firewall in conjunction with a secondary DNS. This setup adds a security layer to the on-prem DNS infrastructure and helps ensure overall DNS redundancy.
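As an illustration of what DNSSEC validation looks like from a client’s perspective, here is a minimal sketch using the third-party dnspython package. It queries a validating resolver (1.1.1.1 is used as an example) and checks the AD (Authenticated Data) flag, which the resolver sets only when the response passed DNSSEC validation.

```python
# Rough sketch: ask a validating resolver for a record and check the
# AD (Authenticated Data) flag to see whether DNSSEC validation passed.
# Requires the third-party dnspython package (pip install dnspython).
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["1.1.1.1"]        # example validating resolver
resolver.use_edns(0, dns.flags.DO, 1232)  # set the DNSSEC OK (DO) bit

answer = resolver.resolve("example.com", "A")
validated = bool(answer.response.flags & dns.flags.AD)
print(answer.rrset)
print("DNSSEC-validated:", validated)
```

Note that the AD flag is only as trustworthy as the path to the resolver, which is why a managed provider with integrated DNSSEC support, or local validation, is preferable.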
Accelerate content delivery by routing traffic across the least-congested routes
Today, the majority of web traffic is served through Content Delivery Networks (CDNs), including traffic from major sites like Amazon and Facebook. A CDN is a geographically distributed group of servers that help provide fast delivery of Internet content to globally dispersed users and can also reduce bandwidth costs.
With servers in multiple locations around the globe, a CDN is able to distribute content closer to website visitors, and in doing so, reduce any inherent network latency and improve page load times. CDNs also serve static assets from cache across their network, reducing the number of requests being made to hosted web servers and resulting in lower bandwidth and hosting costs.
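Caching behaviour is typically driven by the origin server itself: the Cache-Control header tells the CDN (and browsers) which assets may be cached and for how long. The toy Python origin server below, with an example one-day max-age, sketches the idea; real deployments tune these values per asset type.

```python
# Illustrative sketch: an origin server hinting to a CDN (and browsers)
# how long static assets may be cached, via the Cache-Control header.
# The one-day max-age is an arbitrary example value.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Long-lived caching for static assets; no caching elsewhere.
        if self.path.endswith((".css", ".js", ".png", ".jpg", ".woff2")):
            self.send_header("Cache-Control", "public, max-age=86400")
        else:
            self.send_header("Cache-Control", "no-cache")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), CachingHandler).serve_forever()
```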
Minimize the risk of site outages by globally load-balancing traffic
Maximizing server resources and efficiency can be a delicate balancing act. Cloud-based load balancers distribute requests across multiple servers in order to handle spikes in traffic. The load balancing decision takes place at the network edge, closer to the users — allowing businesses to boost response time and effectively optimize their infrastructure while minimizing the risk of server failure.
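Stripped of health checks, geography, and latency weighting, the core balancing decision can be as simple as routing each request to the backend with the fewest active connections. The following toy Python sketch, with invented backend addresses, illustrates that decision in isolation.

```python
# Toy sketch of the load-balancing decision itself: pick the backend
# with the fewest active connections. Real cloud load balancers also
# weigh health checks, geography, and latency.
class LeastConnectionsBalancer:
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}  # active connections per backend

    def acquire(self) -> str:
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend: str) -> None:
        self.active[backend] -= 1

lb = LeastConnectionsBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for _ in range(5):
    print("route request to", lb.acquire())
```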
Protect web applications from malicious attacks
When securing web applications and other business-critical properties, a layered security strategy can help defend against many different kinds of threats.
- Web application firewall protection – A web application firewall, or WAF, protects web applications by filtering and monitoring HTTP traffic. Cloud-based WAFs are typically the most flexible and cost-effective solution to implement, as they can be consistently updated to protect against new threats without significant additional work or cost on the user’s end.
- DDoS attack protection – A DDoS attack is a malicious attempt to overburden servers, devices, networks, or surrounding infrastructure with a flood of illegitimate Internet traffic. By consuming all available bandwidth between targeted devices and the Internet, these attacks not only cause significant service disruptions but have a tangible, negative impact on business, as customers are unable to access a business’s resources (a simple rate-limiting sketch follows this list).
- Malicious bot mitigation – Sites may become compromised when targeted by malicious bot activity, which can overwhelm web servers, skew analytics, prevent users from accessing webpages, steal user data, and compromise critical business functions. By implementing a bot management solution, businesses can distinguish between useful and harmful bot activity and prevent malicious behaviour from impacting user experience.
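As referenced above, one building block common to both DDoS and bot mitigation is per-client rate limiting. The sketch below implements a simple token bucket in Python; the rate and burst values are arbitrary examples, and production systems apply such limits at the network edge rather than in application code.

```python
# Simplified sketch of a per-client token bucket: clients exceeding the
# allowed request rate are rejected, blunting floods while leaving
# legitimate traffic untouched. RATE and BURST are example values.
import time
from collections import defaultdict

RATE = 5.0    # tokens replenished per second
BURST = 10.0  # bucket capacity (maximum burst size)

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(client_ip: str) -> bool:
    """Return True if this client's request is within its rate limit."""
    b = buckets[client_ip]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False

for i in range(15):
    print(i, allow("203.0.113.7"))  # first 10 pass, then the bucket empties
```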
Keep your network up and running
- Protect your network infrastructure – It’s not enough to just protect web servers. Enterprises often have on-premise network infrastructure hosted in public or private data centres that needs protection from DDoS attacks, too. Many DDoS mitigation providers rely on one of two methods for stopping an attack: scrubbing centres or on-premise scanning and filtering via hardware boxes. The problem with both approaches is that they impose a latency penalty that can adversely affect a business. A better way to detect and mitigate DDoS attacks is to do so close to the source — at the network edge. By scanning traffic at the closest data centre in a global, distributed network, high service availability is assured, even during substantial DDoS attacks. This approach reduces the latency penalties that come from routing suspicious traffic to geographically distant scrubbing centres. It also leads to faster attack response times.
- Protect TCP/UDP applications – At the transport layer, attackers may target a business’s server resources by overwhelming all available ports on a server. These DDoS attacks can cause the server to respond slowly to legitimate requests — or not at all. Preventing attacks at the transport layer requires a security solution that can automatically detect attack patterns and block attack traffic.
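A rough Python sketch of such attack-pattern detection follows: it counts connection attempts per source IP over a sliding window and flags sources that exceed a threshold. The window and threshold values are invented for illustration; real mitigations work from live packet or flow data and drop attack traffic upstream.

```python
# Hedged sketch of transport-layer flood detection: count connection
# attempts per source IP in a sliding window and flag abusive sources.
# WINDOW and THRESHOLD are arbitrary example values.
import time
from collections import defaultdict, deque

WINDOW = 10.0    # sliding window length in seconds
THRESHOLD = 100  # attempts per window before a source is flagged

attempts = defaultdict(deque)

def record_attempt(src_ip, now=None) -> bool:
    """Record a connection attempt; return True if src_ip looks abusive."""
    now = time.monotonic() if now is None else now
    q = attempts[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:  # evict attempts outside the window
        q.popleft()
    return len(q) > THRESHOLD

# Simulate a burst of 150 attempts within one second from a single source
flagged = [record_attempt("198.51.100.9", now=t / 150) for t in range(150)]
print("flagged:", any(flagged))  # True once the threshold is crossed
```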
In conclusion, creating a superior online experience requires the right security and performance strategy — one that not only enables enterprises to accelerate content delivery, but ensures network reliability and protects their web properties from site outages, data theft, and other critical attacks.