
Tensor Steganography and AI Cybersecurity
Tensor steganography exploits two key characteristics of deep learning models: the massive number of parameters (weights) in neural networks and the inherent imprecision of floating-point numbers. Learn about this novel technique that combines traditional steganography principles with deep learning model structures.
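To make the mechanism concrete, here is a minimal sketch (not the technique from the article itself) of how a payload bit can be hidden in the least-significant mantissa bit of each float32 weight; the function names are illustrative, and the value change per weight is far below typical model precision:

```python
import struct

def embed_bits(weights, payload_bits):
    """Hide one payload bit in the least-significant mantissa bit
    of each float32 weight; the numeric change is negligible."""
    stego = []
    for w, bit in zip(weights, payload_bits):
        as_int = struct.unpack("<I", struct.pack("<f", w))[0]
        as_int = (as_int & ~1) | bit  # overwrite the lowest mantissa bit
        stego.append(struct.unpack("<f", struct.pack("<I", as_int))[0])
    return stego + weights[len(payload_bits):]

def extract_bits(weights, n):
    """Recover the first n hidden bits from the weight list."""
    return [struct.unpack("<I", struct.pack("<f", w))[0] & 1
            for w in weights[:n]]
```

Scaled up to millions of parameters, even one bit per weight yields substantial hidden capacity, which is exactly why large models make attractive carriers.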
Malware in LLM Python Package Supply Chains
The gptplus and claudeai-eng supply chain attack represents a sophisticated malware campaign that remained active and undetected on PyPI for an extended period. These malicious packages posed as legitimate tools for interacting with popular AI language models (ChatGPT and Claude) while secretly executing data exfiltration and system compromise operations.
Path Traversal Vulnerability in Deep Java Library (DJL) and Its Impact on Java AI Development
A newly discovered path traversal vulnerability (CVE-2025-0851) in Deep Java Library (DJL) could allow attackers to manipulate file paths, exposing Java AI applications to security risks. Learn how this flaw impacts DJL users and how updating to version 0.31.1 mitigates the threat.
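The vulnerability class is general: any code that joins attacker-influenced file names onto a base directory without normalization can be walked out of that directory with `../` sequences. A hedged Python sketch of the standard defense (not DJL's actual fix, and `safe_join` is an illustrative name) looks like this:

```python
import os

def safe_join(base_dir, user_path):
    """Join user input onto base_dir, rejecting any path that
    escapes base_dir after normalization."""
    candidate = os.path.normpath(os.path.join(base_dir, user_path))
    base_abs = os.path.abspath(base_dir)
    # commonpath collapses both paths; if the shared prefix is not
    # base_dir itself, the input traversed out of the sandbox.
    if os.path.commonpath([base_abs, os.path.abspath(candidate)]) != base_abs:
        raise ValueError("path traversal attempt: " + user_path)
    return candidate
```

The same normalize-then-verify pattern applies when extracting model archives, which is the scenario highlighted for DJL.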
Security Risks with Python Package Naming Convention: Typosquatting and Beyond
Beware of typosquatting and misleading Python package names—one small mistake in pip install can expose your system to backdoors, trojans, and malicious code. Learn how attackers exploit package naming conventions and discover best practices to secure your open-source supply chain.
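One common defensive heuristic (a sketch, not Snyk's detection logic; the popular-package list here is a stand-in) is to flag names that sit suspiciously close to a well-known package:

```python
from difflib import SequenceMatcher

# Illustrative allowlist of well-known package names.
POPULAR = {"requests", "numpy", "pandas", "scikit-learn"}

def looks_like_typosquat(name, threshold=0.85):
    """Flag names that are near, but not equal to, a popular package."""
    name = name.lower()
    if name in POPULAR:
        return False  # exact match is the legitimate package
    return any(
        SequenceMatcher(None, name, known).ratio() >= threshold
        for known in POPULAR
    )
```

A check like this can run in CI before `pip install` resolves anything, catching transposition-style typos before they reach a build machine.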
Can Machine Learning Find Path Traversal Vulnerabilities in Go? Snyk Code Can!
Explore how Snyk’s machine learning-powered security tools tackle path traversal vulnerabilities in Golang code. Learn how to secure your Go applications and challenge yourself to detect and exploit vulnerabilities like a pro!