AI vision systems see differently than humans do. When platforms downscale uploaded images to save compute, the mathematical properties of interpolation algorithms create exploitable artifacts. In this presentation, we'll show how to craft images whose invisible pixel perturbations reveal malicious prompts after downscaling, triggering unauthorized tool execution across Google Gemini, Vertex AI, Google Assistant, and Genspark. Beyond image downscaling, we'll explore the broader attack surface, including audio transformations, dithering algorithms, and other preprocessing steps that become prompt injection vectors. You'll learn to fingerprint vulnerable systems using test patterns that reveal which downscaling implementation an AI library uses. We'll demo Anamorpher, our open-source tool for automated attack generation, with both a Python API and a visual interface, and we'll examine practical mitigations, from displaying the actual processed image to adopting design patterns resistant to prompt injection, such as the action selector pattern.
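To make the core idea concrete, here is a minimal, self-contained sketch (not Anamorpher itself, and a toy 8x8 example rather than a real attack image) of why box-filter downscaling creates this channel: each 2x2 block of the full-resolution image is filled with noise whose perturbations cancel under averaging, so a hidden low-resolution payload is invisible at full size but reappears exactly after a 2x area downscale. The array names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden low-res "payload": a 4x4 pattern standing in for rendered prompt text.
hidden = rng.integers(0, 256, size=(4, 4)).astype(np.float64)

# Build an 8x8 "cover" image: each 2x2 block gets opposite-signed
# perturbations around the hidden value, so individual pixels look noisy
# but the block mean equals the hidden pixel.
cover = np.zeros((8, 8))
for i in range(4):
    for j in range(4):
        n = rng.uniform(-40, 40)  # perturbation; a real attack must also
                                  # keep values inside the valid [0, 255] range
        cover[2*i:2*i+2, 2*j:2*j+2] = np.array(
            [[hidden[i, j] + n, hidden[i, j] - n],
             [hidden[i, j] - n, hidden[i, j] + n]])

# Area (box-filter) downscaling: average each 2x2 block.
downscaled = cover.reshape(4, 2, 4, 2).mean(axis=(1, 3))

assert np.allclose(downscaled, hidden)  # payload recovered exactly
```

Bilinear and bicubic interpolation use different (non-uniform) weights, which is why the talk's fingerprinting step matters: the perturbation pattern must be tuned to the specific kernel the target platform applies.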