translate, which is described below.
For highest translation quality, we recommend using our next-gen models. For details, please see here.
To learn more about context in DeepL API translations, we recommend this article.
For more detail about request body parameters, see the Request Body Descriptions section further down on the page.
We also provide a spec that is auto-generated from DeepL’s OpenAPI file. You can find it here.
Note: the code examples use https://api.deepl.com. If you're an API Free user, remember to update your requests to use https://api-free.deepl.com instead.

For details on language variants (e.g., EN-US to EN-GB), please refer to this guide. Translating between variants of the same language will result in no change to the text.

Request Body Descriptions
Note: Set to true (or 1 in url-encoded requests) to enable beta languages. Any request with the enable_beta_languages parameter enabled will use quality_optimized models. Requests combining enable_beta_languages: true and model_type: latency_optimized will be rejected. Beta languages do not support formality or glossaries.
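As an illustrative sketch (not an official client), a request body that opts into beta languages might look like the following; the target language code is a placeholder, and as noted above such a request is served by quality_optimized models:

```python
import json

# Illustrative sketch: a /v2/translate request body opting into beta
# languages. Combining this with model_type: latency_optimized would be
# rejected, and formality/glossaries are not supported with it.
body = {
    "text": ["Good morning!"],
    "target_lang": "EN",        # placeholder; use a beta language code here
    "enable_beta_languages": True,
}
print(json.dumps(body, indent=2))
```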
The context parameter makes it possible to include additional context that can influence a translation but is not translated itself. This additional context can potentially improve translation quality when translating short, low-context source texts such as product names on an e-commerce website, article headlines on a news website, or UI elements.

For example:
- When translating a product name, you might pass the product description as context.
- When translating a news article headline, you might pass the first few sentences or a summary of the article as context.
For best results, we recommend sending a few complete sentences of context in the same language as the source text. There is no size limit for the context parameter itself, but the request body size limit of 128 KiB still applies to all text translation requests.
If you send a request with multiple text parameters, the context parameter will be applied to each one.
Characters included in the context parameter will not be counted toward billing (i.e., there is no additional cost for using the context parameter, and only characters sent in the text parameter(s) will be counted toward billing for text translation even when the context parameter is included in a request).
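As a sketch of how this fits together (the field names follow the parameter descriptions above; the product text is made up), a request body with shared, unbilled context for two short strings could be assembled like this:

```python
import json

# Illustrative sketch: build a /v2/translate request body that passes a
# product description as unbilled context for short, low-context strings.
def build_body(texts, target_lang, context=None):
    body = {"text": list(texts), "target_lang": target_lang}
    if context is not None:
        body["context"] = context  # applied to every entry in "text"
    return body

body = build_body(
    ["Portable charger", "Fast delivery"],
    "DE",
    context="Product page for a 10,000 mAh USB-C power bank.",
)
print(json.dumps(body, indent=2))
```

Only the characters in the two `text` entries would count toward billing; the context string would not.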
- latency_optimized: Uses the lowest latency translation models available (usually classic models; default parameter value)
- quality_optimized: Uses the highest quality translation models available (currently next-gen models)
- prefer_quality_optimized: Same as quality_optimized, with fallback to classic models if no next-gen model is present (as of December 2025, all languages have next-gen models)
If the model_type parameter is set, the response includes a model_type_used field indicating which model was used. Requests with tag_handling_version: v2 will default model_type to quality_optimized. Setting both model_type=latency_optimized and tag_handling_version=v2 will return an error.

If tag_handling is not set to html, the default value is 1, meaning the engine splits on punctuation and on newlines. For text translations where tag_handling=html, the default value is nonewlines, meaning the engine splits on punctuation only, ignoring newlines.
The use of nonewlines as the default value for text translations where tag_handling=html is new behavior that was implemented in November 2022, when HTML handling was moved out of beta.
Possible values are:
- 0 - no splitting at all; whole input is treated as one sentence
- 1 (default when tag_handling is not set to html) - splits on punctuation and on newlines
- nonewlines (default when tag_handling=html) - splits on punctuation only, ignoring newlines
For applications that send one sentence per text parameter, we recommend setting split_sentences to 0, in order to prevent the engine from splitting the sentence unintentionally.
Please note that newlines will split sentences when split_sentences=1. We recommend cleaning files so they don't contain line breaks within sentences, or setting the parameter split_sentences to nonewlines.
Please note that this value will be ignored when using next-gen models (model_type_used=quality_optimized) and a value of:
- 0 will be used if tag_handling is not enabled
- nonewlines will be used if tag_handling is enabled
…as these settings yield the best quality.
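For the one-sentence-per-parameter case recommended above, a minimal request body sketch (field names from the descriptions above; the sentence is illustrative) might be:

```python
import json

# Illustrative sketch: one sentence per "text" entry, so split_sentences is
# set to "0" to keep the engine from splitting on abbreviations or other
# punctuation inside the sentence.
body = {
    "text": ["Mr. Smith arrived at 5 p.m. sharp."],
    "target_lang": "FR",
    "split_sentences": "0",  # other values: "1" (default), "nonewlines"
}
print(json.dumps(body))
```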
The formatting aspects affected by this setting include:
- Punctuation at the beginning and end of the sentence
- Upper/lower case at the beginning of the sentence
DE (German), FR (French), IT (Italian), ES (Spanish), ES-419 (Latin American Spanish), NL (Dutch), PL (Polish), PT-BR and PT-PT (Portuguese), JA (Japanese), and RU (Russian). Learn more about the plain/polite feature for Japanese ↗️.

Setting this parameter with a target language that does not support formality will fail, unless one of the prefer_… options is used. Possible options are:
- default (default)
- more - for a more formal language
- less - for a more informal language
- prefer_more - for a more formal language if available, otherwise fallback to default formality
- prefer_less - for a more informal language if available, otherwise fallback to default formality
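As a sketch, the prefer_ variants described above make the request safe against unsupported target languages; the body below is illustrative:

```python
import json

# Illustrative sketch: request a more formal register with a fallback, so
# the request does not fail if the target language lacks formality support.
body = {
    "text": ["Could you send me the report?"],
    "target_lang": "DE",
    "formality": "prefer_more",
}
print(json.dumps(body))
```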
Using a glossary requires the source_lang parameter to be set, and the language pair of the glossary has to match the language pair of the request.

Any request with the style_id parameter enabled will default to use the quality_optimized model type. Requests combining style_id and model_type: latency_optimized will be rejected.

Supported languages: de, en, es, fr, it, ja, ko, zh, or any variants of these languages. Any request with the custom_instructions parameter enabled will default to use the quality_optimized model type. Requests combining custom_instructions and model_type: latency_optimized will be rejected.

If set to true, the response will include an additional key-value pair with the key billed_characters and a value that is an integer showing the number of characters from the request that will be counted by DeepL for billing purposes. For example: "billed_characters": 42
We may later include billed_characters in the API response by default, at which point it will be necessary to set show_billed_characters to false in order for an API response not to include billed_characters. We will notify users in advance of making this change.

For requests sent as URL-encoded forms, boolean values should be specified as "1" or "0".

- xml: Enable XML tag handling; see XML handling.
- html: Enable HTML tag handling; see HTML handling.
- v2: Use the improved v2 algorithm.
- v1: Use the previous v1 algorithm.
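Combining these parameters, a sketch of an HTML request using the newer algorithm (illustrative content; per the note above, this combination cannot be paired with model_type: latency_optimized) might look like:

```python
import json

# Illustrative sketch: HTML tag handling with the v2 algorithm; this
# defaults the request to quality_optimized models.
body = {
    "text": ["<p>Hello, <b>world</b>!</p>"],
    "target_lang": "NL",
    "tag_handling": "html",
    "tag_handling_version": "v2",
}
print(json.dumps(body))
```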
Automatic outline detection can be disabled by setting the outline_detection parameter to false and selecting the tags that should be considered structure tags. This will split sentences using the splitting_tags parameter. In the example below, we achieve the same results as the automatic engine by disabling automatic detection with outline_detection=false and setting the parameters manually to tag_handling=xml, split_sentences=nonewlines, and splitting_tags=par,title.
While this approach is slightly more complicated, it allows for greater control over the structure of the translation output.
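The original inline example is not reproduced here; as a hedged reconstruction of the manual-control request the text describes (the XML content is made up), the request body would combine the parameters as follows:

```python
import json

# Illustrative reconstruction: automatic outline detection is disabled and
# the structure tags are declared by hand, matching the parameters named
# in the text above.
body = {
    "text": [
        "<document><title>A document's title</title>"
        "<par>This is the first sentence. Followed by a second one.</par>"
        "</document>"
    ],
    "target_lang": "DE",
    "tag_handling": "xml",
    "outline_detection": False,       # disable automatic structure detection
    "split_sentences": "nonewlines",  # split on punctuation only
    "splitting_tags": "par,title",    # tags treated as sentence boundaries
}
print(json.dumps(body))
```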
About the model_type parameter
The model_type parameter lets a user specify whether they would like to optimize for speed until a request is served (latency) or for translation quality. This is done on a best-effort basis by DeepL; not every language pair and feature will behave differently depending on this parameter.
Currently, the DeepL translation API defaults to the classic models if no model_type is included in the requests, except for certain languages (e.g. Thai, which is next-gen only) or certain features (e.g. tag handling v2, which only allows quality_optimized).
Note that the API intentionally does not allow the user to specify a model, but rather their goal, so that the DeepL backend can choose the most suitable model for the user (this avoids users constantly having to update their code with new model versions and learning many different model names and their specifics).
As of December 2025, all source and target languages are supported by next-gen models. Please note that DeepL reserves the right to quietly change which model serves e.g. a model_type=quality_optimized request, as long as we think it is a net benefit to the user (e.g. no significant latency increase, but a quality increase, or a significant latency reduction with no or only very slight decrease in quality).
The /languages endpoint has not yet been updated to include information about model_type support, but we expect to make such a change in the future.
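As a sketch of expressing a goal rather than a model (illustrative text; the response field is the model_type_used key documented above):

```python
import json

# Illustrative sketch: ask for next-gen quality with a classic-model
# fallback. The response would report the decision in "model_type_used".
body = {
    "text": ["How is the weather today?"],
    "target_lang": "ES",
    "model_type": "prefer_quality_optimized",
}
print(json.dumps(body))
```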
Multiple Sentences
The translation function will (by default) try to split the text into sentences before translating. Splitting normally works on punctuation marks (e.g. "." or ";"), though you should not assume that every period will be handled as a sentence separator. This means that you can send multiple sentences as a value of the text parameter. The translation function will separate the sentences and return the whole translated paragraph.

In some cases, the sentence splitting functionality may cause issues by splitting text where there is actually only one sentence. This is especially the case if you're using special/uncommon character sequences which contain punctuation. In this case, you can disable sentence splitting altogether by setting the parameter split_sentences to 0. Please note that this will cause overlong sentences to be cut off, as the DeepL API cannot translate overly long sentences. In this case, you should split the sentences manually before submitting them for translation.
Translating Large Volumes of Text
There are a few methods to translate larger volumes of text:

- If your text is contiguous, you can submit whole paragraphs to be translated in one request and with one text parameter. Prior to translation, your text will be split into sentences and translated accordingly.
- The translate function can take several text parameters and will return translations of each such parameter separately in the same order as they are requested (see example below). Each of the parameter values may contain multiple sentences. Up to 50 texts can be sent for translation per request.
- You can make parallel requests by calling the translate function from several threads/processes.
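Combining the second and third methods, a sketch of chunking a large list of texts into request-sized batches (the 50-texts-per-request limit is from the list above; sentence content is made up) could look like:

```python
import json

# Illustrative sketch: split a large list of texts into batches of at most
# 50 (the per-request limit) and build one request body per batch; each
# body would then be POSTed to /v2/translate, possibly in parallel.
def batches(items, size=50):
    for i in range(0, len(items), size):
        yield items[i:i + size]

texts = [f"Sentence {i}." for i in range(1, 121)]  # 120 texts -> 3 requests
bodies = [{"text": chunk, "target_lang": "FR"} for chunk in batches(texts)]
print(len(bodies))  # 3
```

Translations are returned per text parameter, in the same order as requested, so results from each batch can simply be concatenated.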