[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["필요한 정보가 없음","missingTheInformationINeed","thumb-down"],["너무 복잡함/단계 수가 너무 많음","tooComplicatedTooManySteps","thumb-down"],["오래됨","outOfDate","thumb-down"],["번역 문제","translationIssue","thumb-down"],["샘플/코드 문제","samplesCodeIssue","thumb-down"],["기타","otherDown","thumb-down"]],[],[],[],null,["Deploy AI across mobile, web, and embedded applications\n\n - \n\n On device\n\n Reduce latency. Work offline. Keep your data local \\& private.\n- \n - \n\n Cross-platform\n\n Run the same model across Android, iOS, web, and embedded.\n- \n - \n\n Multi-framework\n\n Compatible with JAX, Keras, PyTorch, and TensorFlow models.\n- \n - \n\n Full AI edge stack\n\n Flexible frameworks, turnkey solutions, hardware accelerators\n\nReady-made solutions and flexible frameworks \n\nLow-code APIs for common AI tasks\n\nCross-platform APIs to tackle common generative AI, vision, text, and audio tasks.\n[Get started with MediaPipe tasks](https://ai.google.dev/edge/mediapipe/solutions/guide) \n\nDeploy custom models cross-platform\n\nPerformantly run JAX, Keras, PyTorch, and TensorFlow models on Android, iOS, web, and embedded devices, optimized for traditional ML and generative AI.\n[Get started with LiteRT](https://ai.google.dev/edge/litert) \n\nShorten development cycles with visualization\n\nVisualize your model's transformation through conversion and quantization. Debug hotspots by\noverlaying benchmarks results.\n[Get started with Model Explorer](https://ai.google.dev/edge/model-explorer) \n\nBuild custom pipelines for complex ML features\n\nBuild your own task by performantly chaining multiple ML models along with pre and post processing\nlogic. Run accelerated (GPU \\& NPU) pipelines without blocking on the CPU.\n[Get started with MediaPipe Framework](https://ai.google.dev/edge/mediapipe/framework) \n\nThe tools and frameworks that power Google's apps \nExplore the full AI edge stack, with products at every level --- from low-code APIs down to hardware specific acceleration libraries. \n\nMediaPipe Tasks \nQuickly build AI features into mobile and web apps using low-code APIs for common tasks spanning generative AI, computer vision, text, and audio. \nGenerative AI\n\nIntegrate generative language and image models directly into your apps with ready-to-use APIs. \nVision\n\nExplore a large range of vision tasks spanning segmentation, classification, detection, recognition, and body landmarks. \nText \\& audio\n\nClassify text and audio across many categories including language, sentiment, and your own custom categories. \nGet started \n[Tasks documentation\nFind all of our ready-made low-code MediaPipe Tasks with documentation and code samples.](https://ai.google.dev/edge/mediapipe/solutions/guide) \n[Generative AI tasks\nRun LLMs and diffusion models on the edge with our MediaPipe generative AI tasks.](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference) \n[Try demos\nExplore our library of MediaPipe Tasks and try them yourself.](https://goo.gle/mediapipe-studio) \n[Model maker documentation\nCustomize the models in our MediaPipe Tasks with your own data.](https://ai.google.dev/edge/mediapipe/solutions/model_maker) \n\nMediaPipe Framework \nA low level framework used to build high performance accelerated ML pipelines, often including multiple ML models combined with pre and post processing. 
MediaPipe Framework

A low-level framework used to build high-performance, accelerated ML pipelines, often combining multiple ML models with pre- and post-processing.
[Get started](https://ai.google.dev/edge/mediapipe/framework)

LiteRT

Deploy AI models authored in any framework across mobile, web, and microcontrollers with optimized hardware-specific acceleration.

Multi-framework: Convert models from JAX, Keras, PyTorch, and TensorFlow to run on the edge.

Cross-platform: Run the exact same model on Android, iOS, web, and microcontrollers with native SDKs.

Lightweight & fast: LiteRT's efficient runtime takes up only a few megabytes and enables model acceleration across CPU, GPU, and NPUs.

Get started

- [Pick a model: Pick a new model, retrain an existing one, or bring your own.](https://ai.google.dev/edge/litert/models/trained)
- [Convert: Convert your JAX, Keras, PyTorch, or TensorFlow model into an optimized LiteRT model.](https://ai.google.dev/edge/litert/models/convert_to_flatbuffer)
- [Deploy: Run a LiteRT model on Android, iOS, web, and microcontrollers.](https://ai.google.dev/edge/litert#integrate-model)
- [Quantize: Compress your model to reduce latency, size, and peak memory.](https://ai.google.dev/edge/litert/models/model_optimization)

A minimal sketch of this convert-quantize-deploy flow appears at the end of this page.

Model Explorer

Visually explore, debug, and compare your models. Overlay performance benchmarks and numerics to pinpoint troublesome hotspots.
[Get started](https://ai.google.dev/edge/model-explorer)

Gemini Nano in Android & Chrome

Build generative AI experiences using Google's most powerful on-device model.
[Learn more about Android AICore](https://developer.android.com/ai/aicore) [Learn more about Chrome Built-In AI](https://developer.chrome.com/docs/ai)

Recent videos and blog posts

- [A walkthrough for Android's on-device GenAI solutions (1 October 2024)](https://www.youtube.com/watch?v=EpKghZYqVW4)
- [How to bring your AI Model to Android devices (2 October 2024)](https://android-developers.googleblog.com/2024/10/bring-your-ai-model-to-android-devices.html)
- [Gemini Nano is now available on Android via experimental access (1 October 2024)](https://android-developers.googleblog.com/2024/10/gemini-nano-experimental-access-available-on-android.html)
- [TensorFlow Lite is now LiteRT (4 September 2024)](https://developers.googleblog.com/en/tensorflow-lite-is-now-litert)
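To make the convert-quantize-deploy flow from the LiteRT section concrete, here is a minimal sketch using the long-standing TensorFlow Lite converter and interpreter entry points, which LiteRT continues to support. The tiny Keras model is a stand-in; substitute your own JAX, Keras, PyTorch, or TensorFlow model.

```python
# Minimal LiteRT sketch: convert a Keras model to a .tflite flatbuffer,
# apply default post-training quantization, and run it with the interpreter.
# The model below is a placeholder for illustration only.
import numpy as np
import tensorflow as tf

# Stand-in model; in practice you would load or train your own.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert, enabling default optimizations (post-training quantization).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# Deploy: run inference with the lightweight interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```

The same `.tflite` file produced here is what the Android, iOS, web, and microcontroller runtimes consume, and what Model Explorer visualizes when you inspect the effects of conversion and quantization.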