How (and why) federated learning enhances cybersecurity




Each year, cyberattacks become more frequent and data breaches grow more expensive. Whether companies want to protect their AI systems during development or use their algorithms to improve their security posture, they must mitigate cybersecurity risks. Federated learning may be able to do both.

What is federated learning?

Federated learning is an approach to AI development in which multiple parties train a single model separately. Each party downloads the current primary algorithm from a central cloud server, trains its configuration independently on local servers and uploads the result upon completion. This way, participants can collaborate remotely without ever exposing their raw data.

The central server weights each disparately trained update by the number of samples behind it, aggregating them to create a single global model. All information remains on each participant's local servers or devices; the centralized repository processes model updates instead of raw data.
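As an illustration of that aggregation step, the sketch below implements sample-weighted averaging in the spirit of federated averaging (FedAvg). The function name, vector sizes and sample counts are hypothetical; real frameworks handle many layers of weights and secure transport.

```python
import numpy as np

def federated_average(updates, sample_counts):
    """Aggregate locally trained model weights into one global model.

    updates: list of weight vectors, one per participant
    sample_counts: number of local training samples behind each update
    """
    total = sum(sample_counts)
    # Weight each participant's update by its share of the total samples.
    return sum((n / total) * w for w, n in zip(updates, sample_counts))

# Three participants trained the same 4-parameter model on local data.
local_weights = [np.array([0.2, 1.1, -0.3, 0.7]),
                 np.array([0.4, 0.9, -0.1, 0.5]),
                 np.array([0.1, 1.3, -0.4, 0.8])]
samples = [1000, 500, 250]  # larger local datasets get more influence

global_weights = federated_average(local_weights, samples)
print(global_weights)
```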


Federated learning’s popularity is rapidly increasing because it addresses common development-related security concerns. It is also highly sought after for its performance advantages. Research shows this technique can improve an image classification model’s accuracy by up to 20% — a substantial increase.

Horizontal federated learning

There are two types of federated learning. The conventional option is horizontal federated learning. In this approach, data is partitioned across various devices. The datasets share feature spaces but have different samples. This enables edge nodes to collaboratively train a machine learning (ML) model without sharing information.

Vertical federated learning

In vertical federated learning, the opposite is true: the parties hold the same samples but different features. Features are distributed vertically across participants, each possessing different attributes about the same set of entities. Because raw features never leave their owners and typically only one party holds the sample labels, this approach preserves privacy.
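To make the distinction between the two approaches concrete, here is a minimal sketch of how the same customer-record table might be split under each one. The column names, party names and pandas usage are illustrative assumptions, not part of any specific federated framework.

```python
import pandas as pd

records = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "login_count": [12, 3, 7, 9],
    "failed_logins": [0, 5, 1, 2],
    "credit_score": [710, 640, 690, 720],
})

# Horizontal FL: same features (columns), different samples (rows).
bank_a_rows = records.iloc[:2]   # customers 1-2 stay with bank A
bank_b_rows = records.iloc[2:]   # customers 3-4 stay with bank B

# Vertical FL: same samples (rows), different features (columns).
bank_features = records[["customer_id", "credit_score"]]
security_features = records[["customer_id", "login_count", "failed_logins"]]
```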

How federated learning strengthens cybersecurity

Traditional development is prone to security gaps. Algorithms need expansive, relevant datasets to maintain accuracy, but involving multiple departments or vendors creates openings for threat actors. They can exploit the lack of visibility and broad attack surface to inject bias, launch prompt injection attacks or exfiltrate sensitive training data.

When algorithms are deployed in cybersecurity roles, their performance can affect an organization's security posture. Research shows that model accuracy can suddenly diminish when processing new data. Systems that appear accurate in testing may fail elsewhere because they learned spurious shortcuts that produce convincing results.

Since AI cannot think critically or genuinely consider context, its accuracy diminishes over time. Even though ML models evolve as they absorb new information, their performance will stagnate if their decision-making skills are based on shortcuts. This is where federated learning comes in.

Other notable benefits of training a centralized model via disparate updates are privacy and security. Since every participant works independently, no one has to share proprietary or sensitive information to progress training. Moreover, the fewer data transfers there are, the lower the risk of a man-in-the-middle (MITM) attack.

All updates are encrypted for secure aggregation. Secure multi-party computation hides individual updates behind encryption schemes so the server only ever sees the combined result, lowering the chances of a breach or MITM attack. This enhances collaboration while minimizing risk, ultimately improving security posture.
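One well-known way to realize secure aggregation is pairwise masking: each pair of participants shares a random mask that one adds and the other subtracts, so the server sees only obscured updates while the masks cancel in the sum. The sketch below demonstrates the cancellation idea only; production protocols such as Bonawitz et al.'s secure aggregation add key agreement, dropout recovery and encryption on the wire.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_updates(updates):
    """Hide each update behind pairwise random masks that cancel in the sum."""
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # In a real protocol this mask derives from a shared secret key.
            mask = rng.normal(size=updates[i].shape)
            masked[i] += mask   # party i adds the mask...
            masked[j] -= mask   # ...party j subtracts it, so the sum is unchanged
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = mask_updates(updates)

# The server sees only masked values, yet the aggregate is exact.
print(sum(masked))   # [9. 12.], same as sum(updates)
```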

One overlooked advantage of federated learning is speed. It has much lower latency than its centralized counterpart. Since training happens locally instead of on a central server, the algorithm can detect, classify and respond to threats faster. Minimal delays and rapid data transmission help cybersecurity professionals respond to bad actors promptly.

Considerations for cybersecurity professionals

Before leveraging this training technique, AI engineers and cybersecurity teams should consider several technical, security and operational factors.

Resource usage

AI development is expensive. Teams building their own model should expect to spend anywhere from $5 million to $200 million upfront, and upwards of $5 million annually for upkeep. The financial commitment is significant even with costs spread out among multiple parties. Business leaders should account for cloud and edge computing costs.

Federated learning is also computationally intensive, which may introduce bandwidth, storage space or computing limitations. While the cloud enables on-demand scalability, cybersecurity teams risk vendor lock-in if they are not careful. Strategic hardware and vendor selection is of the utmost importance.

Participant trust

While disparate training is secure, it lacks transparency, making intentional bias and malicious injection a concern. A consensus mechanism is essential for approving model updates before the centralized algorithm aggregates them. This way, teams can minimize threat risk without sacrificing confidentiality or exposing sensitive information, as sketched below.
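One simple, hypothetical way to screen updates before aggregation is to reject statistical outliers by update magnitude. The norm-based filter below is a sketch of that idea, not a full consensus protocol; in practice, teams would tune the threshold and pair it with robust aggregation rules such as the coordinate-wise median or trimmed mean.

```python
import numpy as np

def screen_updates(updates, z_threshold=2.0):
    """Drop updates whose L2 norm is a statistical outlier.

    A crude defense against poisoned or malformed updates; real systems
    combine this with robust aggregation and participant attestation.
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    z = (norms - norms.mean()) / (norms.std() + 1e-12)
    return [u for u, score in zip(updates, z) if abs(score) <= z_threshold]

honest = [np.random.default_rng(i).normal(0, 1, 8) for i in range(9)]
poisoned = [np.full(8, 50.0)]        # an implausibly large update
accepted = screen_updates(honest + poisoned)
print(len(accepted))                  # 9: the outlier is filtered out
```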

Training data security

While this machine learning training technique can improve a firm's security posture, no system is 100% secure. Developing a model in the cloud comes with the risk of insider threats, human error and data loss. Redundancy is key: teams should create backups to prevent disruption and to roll back updates if necessary.

Decision-makers should also revisit their training datasets' sources. Dataset borrowing is common in ML communities, raising well-founded concerns about model misalignment. On Papers With Code, more than 50% of task communities use borrowed datasets at least 57.8% of the time, and 50% of the datasets there come from just 12 universities.

Applications of federated learning in cybersecurity

Once the primary algorithm aggregates and weighs participants’ updates, it can be reshared for whatever application it was trained for. Cybersecurity teams can use it for threat detection. The advantage here is twofold — while threat actors are left guessing since they cannot easily exfiltrate data, professionals pool insights for highly accurate output.

Federated learning is ideal for adjacent applications like threat classification or indicator-of-compromise detection. The large, diverse datasets and extensive training behind the model build a broad knowledge base. Cybersecurity professionals can use the model as a unified defense mechanism to protect broad attack surfaces.

ML models — especially those that make predictions — are prone to drift over time as concepts evolve or variables become less relevant. With federated learning, teams could periodically update their model with varied features or data samples, resulting in more accurate, timely insights.
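As a sketch of what such monitoring might look like, the class below tracks rolling accuracy against a baseline and signals when a fresh federated training round may be warranted. The window size, tolerance and retraining hook are assumptions for illustration, not a standard API.

```python
from collections import deque

class DriftMonitor:
    """Signal retraining when rolling accuracy falls below a baseline."""

    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = miss

    def record(self, correct: bool) -> bool:
        """Log one prediction outcome; return True if retraining is warranted."""
        self.outcomes.append(1 if correct else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False   # not enough recent data yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.95)
# if monitor.record(prediction == label): kick off a new federated round
```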

Leveraging federated learning for cybersecurity

Whether companies want to secure their training dataset or leverage AI for threat detection, they should consider using federated learning. This technique could improve accuracy and performance and strengthen their security posture as long as they strategically navigate potential insider threats or breach risks.

Zac Amos is the features editor at ReHack.
