

Everybody benefits from accessible filmmaking



Accessible Filmmaking: why do we need it and who benefits from it?


In an increasingly multilingual and accessible world, a monolingual and non-inclusive approach to filmmaking is certain to leave behind huge swathes of audience – not only foreign audiences and people with disabilities, who require the production of additional soundtracks or subtitles, but also the viewers of the growing number of films that include more than one language in their original versions.

Current distribution strategies and exhibition platforms severely underestimate the audience that exists for accessible cinema. Over 50% of the revenue obtained by most current films comes from translated (dubbing, subtitling) and accessible versions (subtitling of language and sound, audio description [AD] of the image), yet only 0.01%-0.1% of the budget is spent on these additional versions. To compound matters further, these additional versions are usually produced with limited time or money, for little remuneration, and traditionally involving zero contact with the creative team.

This can result in a version of the film that is artistically compromised: the filmmaker’s aesthetic and tonal vision may be ruined by the use of large, brightly lit subtitles over a dimly lit and subdued scene; an inaccurate AD track may give scant narrative details, leading to plot points not being effectively established; worse still, it can even affect the representation of characters. The result may be a vastly inferior product that betrays the filmmaker’s original artistic intentions.

Despite being joined by a common art and a shared objective, filmmaking and translation/accessibility have unfortunately remained two separate professions – but historically this was not always the case. During the silent film era, the intertitles were considered a vital part of the medium’s storytelling – and therefore were part of the standard post-production process, and budgeted for accordingly. It was only as the medium moved into the “talkies” era that subtitling and dubbing were relegated to the distribution process.

Research into audiovisual translation spanning over two decades has shown that this relegation has had a negative impact on the way foreign audiences and people with disabilities consume and respond to films. In an effort to avoid these audiences experiencing an inferior product, Accessible Filmmaking encourages close collaboration between filmmakers and translators/media access experts.

The following guide is intended for filmmakers and other professionals within the film industry who wish to become accessible filmmakers. The approach is supported by both the EU and the UN, and has been tried and tested successfully in research, training and professional practice.

                     FILM TRANSLATION AND MEDIA ACCESSIBILITY


Any translation process carries risks and complications, such as the impossibility of translating word for word or the difficulty in conveying meaning and form faithfully. Audiovisual translation is particularly complex, as by combining image and sound the range of possible contexts is greatly expanded. This can lead to various obstacles for translators – just look at this virtually untranslatable clip.

There are five main types of audiovisual translation and accessibility:


Dubbing is the process of replacing the original dialogue track. The dialogue is translated into the target language and recorded by a new cast of actors.


Subtitling is a translation process limited to text form. Traditionally, it involves presenting text at the bottom of the screen – although some films experiment with this placement. This text conveys dialogue and on-screen text (insert shots such as letters, web pages, inscriptions, etc) and relays selective information from the soundtrack (such as song lyrics, or identifying the voices of any off-screen characters).


Conventionally referred to as subtitles for the deaf and hard of hearing (SDH), this type of subtitling is often produced for people with hearing loss, but it is also targeted at anyone else who does not have access to sound, including hearing viewers who need subtitles for linguistic, cognitive, age-related or contextual reasons (e.g. watching TV with the volume off in a pub, a hospital or on public transport). These subtitles are normally in the same language as the film (although they can involve translation) and include descriptions of non-verbal diegetic sounds (doorbells, phones) that may be needed in order to understand the narrative.


Sign language interpreting (SLI) is the transfer from an oral language into a sign language or vice versa, mostly carried out simultaneously. Sign languages are full-fledged natural languages that allow communication through the visual and spatial channel (and the tactile channel in the case of deafblind people). Sign languages make use of manual articulation to communicate, but also arm articulation, body positioning and facial expression. There is no universal sign language (although there is an international sign system, International Sign); each sign language emerges and develops naturally and is independent from the oral languages spoken in the same region.

Although it is easy to assume that subtitles for the D/deaf and hard of hearing (SDH), also known as captioning, meet the accessibility needs of all people with hearing impairment, this is not true. Firstly, sign languages are the first, native and preferred form of communication of many Deaf people. They are used naturally within Deaf communities and Deaf cultures, but also by hearing friends, relatives and professionals who work with Deaf people. Sign languages are part of a culture and, like many oral languages, they are minority, minoritised and non-hegemonic languages that need strong language policies and actions to promote their use and knowledge, as well as knowledge of the culture they belong to. Secondly, not all D/deaf people are able to understand and enjoy SDH under the current conventions. Written language, and therefore SDH, is a graphical representation of an oral language. People with less knowledge of or access to oral languages, whatever the reason, will therefore also have reduced access to written language and, in turn, to accessibility by means of SDH.


Audio Description is an additional audio commentary that describes the images and on-screen action for users who may not have access to them, including blind and partially sighted audiences.


Dubbing is the main form of translation in countries such as Spain, France, Italy and Germany. It is used all over the world as the preferred method to translate children’s films.

Dubbing is invisible – while viewers of subtitled films have access to both the translation and the original dialogue, viewers of dubbed films have no access to the original. This makes dubbing an effective tool for censorship and manipulation (provided the new dialogue track is synchronised with the images and on-screen action). Content that may offend specific cultures can be removed, as in the two examples below:

In this example, the dubbed version has a character confess to being sick – instead of revealing his homosexuality, as in the original.

In this example, the Spanish dub omits the character’s reference to having visited Francoist Spain.

Synchronies: in order to appear smooth and in sync with the images and on-screen action, the dubbing translation must be as long as the original (with a leeway of one or two syllables). During close-ups it must also contain as many bilabial and labio-dental consonants (m, p, b, f, v) as the original script. The more close-ups featured in a film, therefore, the more challenging it is to remain faithful to the original.
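The length and labial constraints above can be checked mechanically. The sketch below is illustrative only – character-level counting is a crude proxy for the syllable-level matching a professional dubbing adapter performs, and the example lines are invented:

```python
# Illustrative sketch only: a crude, character-level check of the two
# dubbing constraints mentioned above (overall length and the number of
# bilabial/labio-dental consonants that matter for lip-sync in close-ups).
LABIALS = set("mpbfv")

def labial_count(line: str) -> int:
    """Count bilabial and labio-dental consonants (m, p, b, f, v)."""
    return sum(1 for ch in line.lower() if ch in LABIALS)

def sync_report(original: str, dubbed: str) -> dict:
    """Rough lip-sync indicators for one line of dialogue."""
    return {
        "original_labials": labial_count(original),
        "dubbed_labials": labial_count(dubbed),
        "length_diff_chars": abs(len(original) - len(dubbed)),
    }

# Invented example: an English line and a hypothetical Spanish dub.
print(sync_report("My problem is bigger than before.",
                  "Mi problema es más grande que antes."))
```

A large mismatch in either figure suggests the line may need re-adapting before it reaches the recording booth.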

The language used in dubbing has often been described as contrived, stilted and unidiomatic – in short, it often lacks the sounds and expressions natural to a native speaker. This is commonly referred to as dubbese – a culture-specific register characterised by tense and emphatic pronunciation, along with the presence of recurrent, outdated and often formal translation solutions.

Three of the most difficult challenges for dubbing are accents (geographical, temporal or social), registers (formal, informal, colloquial, etc) and multiple languages, all of which are often standardised or eliminated in the dubbed film. Multilingualism can however be preserved with different strategies, such as combining dubbing and subtitles.

When a character sings, this is not usually re-recorded for the dubbed version – instead it is subtitled, with the original singing voice left intact – meaning viewers will hear two voices for the same character. This can potentially impact the credibility of the film, or at least affect the suspension of disbelief.


Subtitling is the main form of translation in most countries around the world. It is used in cinemas, on TV and on streaming platforms. It is also the main form of access for viewers with hearing loss, viewers with learning disabilities, and hearing viewers in noisy environments.

Subtitling is visible – both the original (in audio form) and the translation (in text form) live alongside one another. As such, a viewer who understands both languages can compare the two versions and is often aware of any inaccuracies.

Subtitles are meant to be viewed alongside images, which means they must abide by certain time and space requirements – for the simple fact that they need to be on screen long enough for the viewer to read them comfortably, and they must not crowd the image itself. Generally speaking, a line should not exceed 40 characters, a subtitle should not have more than two lines, and a minimum of 1 second of display time should be allowed for every 15 characters.
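These rules of thumb are easy to automate. The following sketch assumes the limits quoted above (roughly 40 characters per line, two lines, 1 second per 15 characters); exact thresholds vary by broadcaster and platform, and the example subtitle is invented:

```python
# A minimal sketch, assuming the rule-of-thumb limits mentioned above.
# Real broadcaster style guides use different (and stricter) thresholds.
MAX_CHARS_PER_LINE = 40
MAX_LINES = 2
SECONDS_PER_15_CHARS = 1.0

def check_subtitle(lines: list[str], duration_s: float) -> list[str]:
    """Return a list of rule-of-thumb violations for one subtitle."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)} > {MAX_LINES}")
    for i, line in enumerate(lines, 1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line {i} too long: {len(line)} chars")
    total_chars = sum(len(line) for line in lines)
    min_duration = total_chars / 15 * SECONDS_PER_15_CHARS
    if duration_s < min_duration:
        problems.append(
            f"too fast: needs at least {min_duration:.1f}s, has {duration_s:.1f}s"
        )
    return problems

# 46 characters shown for only 2 seconds will be flagged as too fast.
print(check_subtitle(["I never meant for any of this to happen,", "Sarah."], 2.0))
```

A check like this catches timing problems early, before the subtitled version reaches viewers.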

Regardless of the target language, there are linguistic commonalities across all subtitling – a ‘language’ often referred to as subtitlese. Subtitlese tends to omit oral features, clean up grammatical and lexical errors, and neutralise accents and registers – especially colloquialisms. Different strategies may be applied to maintain multilingualism in subtitling, which is easier than in dubbing.

As shown by eye-tracking research, subtitles make us watch a film differently.

  • When we watch a film without subtitles:

We attempt to extract meaning from the image and our eyes move voluntarily as we take on different tasks. As a consequence, we miss what we are not specifically looking for (so-called ‘inattentional blindness’).

Our eyes also move involuntarily – we think we are free to look where we want, but we are inadvertently guided across the screen by the cinematic tools employed by the filmmaker (known as ‘illusion of volition’). This brings our gaze together with that of other viewers (known as ‘attentional synchrony’) – an average shot has us focusing on a mere 3.8% of the screen.

  • When we watch a film with subtitles:

The moment a subtitle (or any other form of text) is displayed on screen, our eyes are immediately drawn to it – regardless of whether it is needed, or even understood. Once the subtitle is read, our attention turns to the image, then often to faces, which means viewers of subtitled films have less time to explore the screen than viewers of the original.

The faster the dialogue, the more time the viewer spends on the subtitles – meaning less time is left to scan the image (as in this example).

The pace of a subtitled film is therefore not only determined by its editing, but by the way in which subtitles are displayed and read by the viewers. Reading subtitles increases the pace at which a film is seen (and felt).

On some occasions, mise-en-scène may interfere with the subtitles, which may render the subtitles illegible (as in this humorous take on this subtitling faux-pas).

On other occasions, reading the subtitles may prevent the audience from viewing the images properly – this is known as ‘subtitling blindness’. It usually happens:

  • In shots featuring dialogue/narration over on-screen text.
  • If a scene starts with dialogue, or contains fast dialogue throughout, but there is simultaneously an important visual element that needs to be conveyed.
  • In shots where the subtitles cover important visual elements.
  • In short duration shots that contain dialogue.

In conclusion, if subtitling is not done carefully, viewers end up missing a significant portion of the imagery compared to the viewing experience of the original audience – there is a risk that they watch the film in such a different fashion from the original intent that, in essence, it becomes a different film. A case in point is this clip from the short documentary Joining the Dots. Unlike the viewers of the original version (see the orange dot on the screen), the viewers of the subtitled version (blue dot on the screen) miss the shot showing the protagonist’s walking stick (between 1:03 and 1:07) because they are reading the subtitle.

               Inclusive subtitling

Viewers with hearing loss tend to locate the subtitles faster than hearing viewers and take more time to read them, but they make up for this with highly proficient visual perception.

Unlike standard subtitling, these subtitles include non-verbal diegetic sound information that may be required in order to understand the story, such as:

  • Identifying the speaker, which is one of the key priorities for viewers with hearing loss, and is carried out by means of dashes/chevrons, colours (white, yellow, cyan and green, normally assigned in order of the character’s importance), name tags or displacement.
  • Description of the tone and mood of the character’s dialogue, which can cover volume, intensity, emotions and accents.
  • Description of sound effects.
  • Description of music and/or transcription of lyrics.
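The colour convention for speaker identification mentioned above can be sketched as a simple mapping. This is illustrative only – broadcaster guidelines differ, and the character names are invented:

```python
# Illustrative only: assign SDH colours to characters in order of
# importance, following the white/yellow/cyan/green convention noted
# above; any remaining characters fall back to white.
SDH_COLOURS = ["white", "yellow", "cyan", "green"]

def assign_colours(characters_by_importance: list[str]) -> dict[str, str]:
    """Map each character to an SDH colour, most important first."""
    return {
        name: SDH_COLOURS[i] if i < len(SDH_COLOURS) else "white"
        for i, name in enumerate(characters_by_importance)
    }

# Invented cast list, ordered by importance:
print(assign_colours(["Ana", "Luis", "Doctor", "Neighbour", "Waiter"]))
```

In an Accessible Filmmaking workflow, this assignment would be discussed with the creative team so the colours do not clash with the film's palette.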

Despite its attempt to describe sounds, SDH tends to:

  • Distil and reduce complex soundscapes to single descriptions.
  • Formalise sounds (reading a description of an accent or difficult speech patterns tends to look more formal than its corresponding audio).
  • Equalise sounds (as the descriptions do not normally include variations in volume).
  • Linearise sounds (as the descriptions are read consecutively, even if the sounds are simultaneous).

               Creative subtitles

Creative subtitles respond to the specific qualities of every film, giving the subtitlers and filmmakers more freedom to create an aesthetic that suits that of the original film. They are part of the image and contribute to the typographic and aesthetic identity of the film.

Instead of being constrained by standard conventions, these subtitles experiment with the following aspects:

  • Font: instead of being bound by the usual subtitling fonts (Arial, Verdana, etc.), creative subtitlers (in collaboration with filmmakers) consider many other fonts on the basis of their legibility, how they contribute to the typographic identity of the film, and how they interact with other on-screen text.
  • Size: it can be altered to indicate distance or volume.
  • Placement: this is the most characteristic feature of creative subtitles, which are often placed in different positions to improve legibility, aid character identification, or for aesthetic reasons – often allowing the viewers to explore the image for much longer than they would be able to with standard subtitles.
  • Effects: creative subtitles can also play with movement, display mode, and interaction with the characters’ movements.

Research shows that creative subtitles allow subtitling viewers to spend more time on the images, helping to bridge the gap between the experience of the original viewers and that of the viewers of translated/accessible versions, while at the same time providing an exciting opportunity for collaboration and innovation between filmmakers and translators.

               Sign language interpreting

In audiovisual translation, interpreting is usually carried out from an oral language into a sign language. SLI is usually a translation and accessibility mode that is added to the product after post-production.

SLI is usually broadcast in the lower right corner of the screen, either in a separate box or with the image of the interpreter superimposed directly on the original image. In some cases, the audiovisual content is made smaller to offer a bigger image of the interpreter. The image of the interpreter can be continuous or intermittent. When the image is continuous and there are elements that do not need interpreting, the interpreter stays on screen but does not move. In the intermittent mode, the image of the interpreter disappears when there is no content to be interpreted.

Interpreters are usually dressed in one solid colour that contrasts as much as possible with their skin tone. When SLI is presented in a box, a monochrome background that contrasts as much as possible with the interpreter’s clothes is recommended, to facilitate comprehension and avoid fatiguing the viewer. With this in mind, jewellery, accessories and nail polish are not recommended. Nevertheless, in performances in which clothing or other visual accessories are an important element – in carnival, for instance – SLI can relate to this aesthetic component.

In live TV events, and given the fatigue inherent in all types of interpreting, there are usually two (or more) interpreters who take turns. In pre-recorded events, on the contrary, it is usual to see just one interpreter rendering the utterances of all characters, changing body position or producing each character’s name sign to indicate who is talking.

Sometimes, live interpreting for TV is carried out from a different location with limited access to the visual elements of the original discourse. 

When SLI is offered in live events, the interpreter usually stays at a corner of the stage and the user needs to choose between looking at the visual elements of the performance and looking at the interpreter. Although this situation is more noticeable in staged live events, because the distance between interpreter and action is greater, the need to look closely at the interpreter also affects SLI on TV and in any other media – the attention needed to follow the interpretation prevents viewers from dedicating time to understanding and enjoying other visual elements on screen or stage.

In artistic performances, such as theatre or dance, SLI moves away from the formality that users are accustomed to in access to news, debates and conferences. This, along with the fact that artistic performances have more action than informative events, increases the use of classifiers over signs, which allows for more dynamism and easier visualisation of the action. In sign languages, classifiers are usually used to communicate the movement, location or appearance of something or someone in a more visual way than spelling it out sign by sign, which makes them much more useful than signs to express actions or narratives. Once a concept (e.g. a car) has been referred to by its sign (hands on steering wheel) and its classifier (extended hand, palm facing down), the signer can move that hand to communicate how fast the car is going, whether it hits the brakes, how it takes the curves or how it was parked, without referring sign by sign to those concepts (fast, brake, park, etc.) – just moving the hand accordingly and complementing it with body positioning and facial expression. This is inherent to all sign languages and allows for a more dynamic narrative. One classifier can relate to different concepts – in the previous case, it can designate a car but also a book, a door, a bed, a foot or a shark, depending on the context. This strategy is usually more useful for expressing dynamic actions than static ideas, but classifiers are also used to communicate the form, position or size of objects or people. They are, therefore, used in all types of messages, regardless of their communicative function or the amount of action involved.

                Creativity in sign language (interpreting)

SL as a communication language in audiovisual products, or SLI, can be incredibly creative or highly standardised/normative, or can sit at any point on the continuum between these two poles. There are various ways in which creativity can be implemented in SL(I), and they need to be considered case by case, both by the artistic team of the audiovisual product and by the interpreters and signing people involved in it.

Creativity in SL(I) of an audiovisual product can be incorporated externally to the language itself, in post-production or in earlier phases. For example, visual effects such as transparency or blurriness can be added to the interpreter’s image to signal voices in the distance, backing vocals in songs or incomprehensible utterances, among others. Other creative options may include:

  • Elements or objects in the interpreter’s box.
  • Moving the interpreter’s image across the screen in pre-recorded programmes.
  • An interpreter the audience can relate to (because of their age, for example).
  • Changes of clothing.
  • Integrating the interpreter into the cast.
  • Creating a new signed audiovisual product (in this case signed by a Deaf celebrity) that replicates the artistic vision of the original without pretending to be a copy of it.

It is worth mentioning that the vast majority of SL interpreters and of the people in charge of decision-making regarding SL(I) in audiovisual products are still hearing people. Nevertheless, initiatives that include signing and non-signing D/deaf people – such as D/deaf accessibility consultants (as in the case of Movistar 5S), professional Deaf interpreters, or D/deaf people in the cast and as part of the artistic or technical team – are gaining a foothold in the industry. This might become the norm in the future but, for now, it remains one way of incorporating creativity into SL(I) in audiovisual production. Such actions can lead to an improvement in the quality of access to content while allowing significant progress in accessibility to creation.

Creativity can also be included in the sign language itself, just as it can be in any oral language. In this sense, SL(I) can easily move away from a formal and standardised expression in favour of a communication style more in line with the artistic vision of the product that is being signed or interpreted. This can be done by making the most of space, body movement, facial expression, rhythm, non-standardised signs and classifiers, among others. It goes without saying that the communicative and artistic skills of the signer, and the communication between them and the creative team, are key to achieving this.

Although creativity in SL(I) can be incorporated in the post-production phase, as in the previous examples, in some theatre and dance companies, such as this one, SL(I) is part of the performance from the beginning. In such cases, SL(I) is not a post-production element but a central part of the performance, conceived and incorporated from the very start. With this approach, signers and interpreters are no longer external agents but part of the cast and crew.

With all this in mind, we can say that creativity in SL(I) can turn this accessibility mode, initially addressed to the Deaf communities, into an artistic visual element for hearers and non-signers. This can happen, for example, when a visual element is added to performances that are based only or mainly on sound, such as music concerts. In Spain, the singer Rozalén is known for always performing on stage with her interpreter, Beatriz Romero, who also features prominently in some of her video clips. This started as an initiative to bring her music closer to Deaf people, but it has become much more than that – it is now a central element of her concerts that is enjoyed and understood by signers and non-signers, D/deaf and hearing alike.

Building on this idea that SL(I) is not just for Deaf signing people, a new form of visual performance based on sign language and body movement has emerged: Visual Vernacular. This type of performance is understood by signers and non-signers alike, and it has been incorporated into new theatre companies and initiatives that combine Visual Vernacular with oral and signed languages to offer more inclusive performances.

Many of the examples mentioned above are not yet widespread practice in filmmaking, where the vast majority of SLI is incorporated after post-production and in a standardised way, and where SL as a language – not merely as an accessibility mode – is still scarcely represented. It is true that more and more films incorporate SLs, but they are normally represented as a minority language (as in The Bélier Family) and only in a few cases as the main language (as in the web series produced by IdenDeaf, Mírame cuando te hablo, or programmes from BSL Zone, in which a large part of the cast is Deaf). There is no doubt that this helps raise awareness of hearing impairment, minoritised sign languages, and Deaf communities and cultures, while also promoting access to content. Nevertheless, there is still a long way to go to achieve a truly inclusive approach in which SL(I) is taken into account during the (pre-)production of films, making the most of the artistic and creative possibilities of this form of communication.

Audio description

The AD has to convey visual information in verbal form so that the audience is left in no doubt as to the location of the scene, the identity of the speaker, their physical appearance (facial expressions, body language), and any action taking place.

The AD should avoid masking the dialogue or important sound effects, which means that it is heavily constrained by the duration of the “gap” between bursts of dialogue. If the gap is very short, the AD can be too quick for the audience to follow and alters not only their understanding but also the pace of the scene, as in this example from Frozen.

AD needs to identify the characters so the audience knows who is speaking. This can be avoided if the speakers identify themselves or are identified by another character – which means working with the scriptwriter from the beginning.

In the same way, locations that are recognisable from the image can be identified in the script. Alternatively, an establishing shot can be created with no conflicting dialogue, such that the describer has time to add a suitable description.

The AD does not need to include self-explanatory sound effects (a doorbell or a gunshot), but it should cover those that can only be interpreted with the help of the accompanying visuals.

While guidelines advise against the description of cinematic information (camera movements, editing transitions, etc), research shows that most viewers appreciate this additional content.

The information that has been left out of the AD can be included in an audio introduction – a standalone description that can be made available either before the film or online, and that details information about the film’s visual style, fuller descriptions of characters and settings, a brief synopsis, and even cast and production details.

In foreign or multilingual films featuring subtitles, the audio describer reads the subtitles aloud for the audience, further reducing the amount of time available for audio description. Alternatively, AD may be added to the dubbed version. The danger here is that all the foreign characters no longer “sound” foreign.

Humour is particularly challenging, and often lost in the process – the length of time it takes for the audio describer to fully detail the joke usually spoils the comic timing.

Although most guidelines advocate the use of objective and cold descriptions, an Accessible Filmmaking approach allows filmmakers to have a say in the process – ideally incorporating their creative vision so that the tone of the AD track’s language and delivery captures the tone of the images seen by the original viewers.



It pays to start thinking about translation and accessibility as early as possible. Outlined below are the different stages of production. Here you will learn at which stage each specific accessibility activity should take place.

            1. Development and pre-production
            2. Production
            3. Post-production
            4. Pre-distribution

                 Development and pre-production

Translation in the scriptwriting of multilingual films:

  • When dealing with different languages, allowing for collaboration with native speakers – and even for co-creation of certain lines – can provide the dialogue with a degree of freshness and originality.
  • If this co-creation is not possible, it would be advisable to involve a professional translator early on in the process. This person can be responsible for ensuring foreign dialogue sounds natural, and once the film moves into post-production they can oversee the dubbing or subtitling.
  • Films seeking co-production funding – particularly co-productions across territories speaking different languages – will often require delivery of the script in the language native to the respective company. A professional translator should be used for this – if possible, the same translator should then handle subsequent translations of the film (for dubbing or subtitling).

Provision of pre-production and funding materials (scripts, storyboards, pitch decks, etc):

  • Translators don’t normally have access to pre-production material, which can be essential to finding the right tone and style needed when it comes to translations/audio descriptions.


On-site translation/interpreting

  • In multilingual films, this may be needed for communication amongst the members of the crew. It is advisable to find a professional interpreter trained in film translation, who can later on produce the first translated/accessible version of the film. 
  • Having media access professionals on set may help integrate performers and crew members with disabilities.


Research shows that mise-en-scène is the most memorable aspect of filmmaking – it is where directors exert control over what happens in the frame, what the viewers see, when they see it, and for how long. Unfortunately, this only applies to viewers of the original film – not to those of the translated and accessible versions, as most filmmakers are unaware of the impact that translation and accessibility may have on the mise-en-scène of their films.

Settings (colour, composition and props)

  • The standard colours used to identify characters in subtitles for the deaf may clash with the film’s colour palette. A discussion between the translator and the film’s creative team may be useful in ensuring that the final colours used do not work against the original artistic intent.
  • Key props (or other visual cues) that are positioned at the bottom of the frame could be obscured by standard subtitles. An on-set discussion between filmmakers and translators could prevent this.
  • Those filmmakers interested in exploring the potential of creative subtitling may want to consider the possible placement of titles during principal photography. Although the titles themselves are produced in post-production, by that stage there is no longer any freedom to experiment with the composition of the image. An early awareness of the process could help the two elements work in tandem.


If filmmakers are to reach foreign or deaf audiences, they may want to take steps to ensure that costumes don’t clash with the standard subtitle colours (typically white, yellow, cyan and green), rendering them illegible.


  • In films where performances rely heavily on the actors’ voices, the dubbing schedule should allow both for discussions between the translator and the dubbing actors and for ample time for multiple takes. In cases where the filmmakers feel it is integral to keep the performance of the original actors, it may be worth discussing with the distribution company whether subtitling is a better option.

  • Filmmakers may want to consider which type of voice(s) and accents would be most suitable to deliver the AD commentary to complement the actors’ voices in the film.

Cinematography: close-ups

The involvement of a translator may be useful to flag any potential issues that the use of close-ups can cause in the translated/accessible version of the film:

  • In dubbing, close-ups must be lip-synched, which makes it difficult to be faithful to both the content and style of the original dialogue.
  • In subtitled films, the use of bottom-placed text during a close-up is likely to obscure the character’s mouth, negatively impacting the aesthetic and preventing deaf viewers from using lipreading in order to follow the dialogue. 

The director and cinematographer should be aware of this, whether or not creative subtitles or adjustments to the framing are feasible solutions.

Speech recognition and the transcripts of documentaries

  • When a transcription of footage is needed for editing, it can be produced by live subtitlers, who normally use speech recognition and are considerably faster than manual transcribers.
  • Along with the transcription, they can also prepare the template for subtitles and SDH, which can then be used as a basis for translation into other languages.
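As an illustration of how such a template might be assembled, the sketch below converts timed transcript segments into the widely used SRT subtitle format; the segment data and helper names are invented for the example.

```python
# Sketch: turning timed transcript segments (as a respeaker might produce
# them) into an SRT subtitle template. The segment data is invented.

def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Build an SRT document from (start, end, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

segments = [
    (0.0, 2.5, "We started filming in March."),
    (3.1, 6.0, "The village had no electricity back then."),
]
print(to_srt(segments))
```

The same timed segments can then serve both the editor (as a searchable transcript) and the subtitler (as a timing template for SDH and interlingual versions).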


Editing and translation/accessibility

The rule of six: according to acclaimed editor and sound designer Walter Murch, when editing a scene there are six closely interconnected priorities:

    • To remain true to the emotion of the scene
    • To advance the story
    • To make rhythmic sense
    • To guide the audience’s eye
    • To maintain continuity
    • To maintain spatial relationships

If any of these priorities is altered, so are the others. Editors should consider how the presence of subtitles changes the viewers’ eye trace and, as a result, the way in which they understand and experience the emotion of a scene. The same considerations should be taken into account when assessing the AD script and its delivery.

Transition shots and cutaways

  • When they are used with no dialogue or narration, transition shots and cutaways can bridge the gap between original and foreign viewers, as the translated film becomes – if only briefly – the same as the original.
  • When they are used with dialogue or narration, transition shots and cutaways should be left on screen long enough for viewers to read the subtitle and view the image and for the describer to identify new characters and locations.
  • When they are used with dialogue or narration and on-screen text, transition shots and cutaways will require one subtitle for the dialogue/narration and another for the on-screen text, resulting in a shot that is aesthetically busy and therefore difficult to interpret.


Regardless of the translation/accessibility mode adopted, the filmmakers (in collaboration with the director of accessibility/translation) may want to produce the equivalent of a style guide, advising on how to approach specific things such as humour, character portrayal, and general tone.


Dubbing

The following elements may be discussed at this stage:

  • Is dubbing the best form of translation for this film?
  • Are there close-ups where a choice must be made between accurate lip synchrony or accurate translation?
  • Are colloquial language, swearwords and taboo language being eliminated (as normally happens in dubbese)?
  • How are accents being dealt with in the dubbed dialogue?
  • Are songs being dubbed or subtitled?
  • Is the presence of other languages in the original film being marked (and, if so, how?), or does the dubbed film only feature one language?


Subtitling

The following elements may be discussed at this stage:

  • Is subtitling the best form of translation for this film?
  • What has been lost in the transition from oral to written language?
  • How has the subtitler dealt with discourse markers, interactional features, intonation, grammatical and lexical errors, registers (especially colloquial) and accents?
  • Should the songs in the film be subtitled and, if so, to what extent has it been possible to convey the content, rhythm and rhyming structure?
  • Is the presence of other languages in the original film being marked in the subtitles? If so, how?
  • In scenes where characters are speaking simultaneously, which voice should be subtitled and which omitted?
  • Should dialogue occurring in the background of the main action be foregrounded in the subtitles (thus competing for the viewers’ attention at the same level as the images), or omitted altogether?
  • Are there any instances in which the subtitles are not legible, and what can be done to solve this, bearing in mind that the editing has already been locked?
  • Should the subtitler play with the balance between one-line and two-line subtitles in order to manipulate the pace of the film?
  • It is inevitable that dialogue is reduced when subtitling, but is it a significant amount or barely noticeable? How does this reduction affect each scene in question?
  • Are there shots or scenes where the subtitles are too fast?
  • Are there times where the overloading of subtitles significantly reduces the time viewers have to spend on the image?
  • Does the presence of on-screen text in the original film clash with the subtitles? If so, do they appear on screen long enough for the viewers to read both?
  • Are there any instances in which reading a subtitle prevents the viewer from seeing an important visual element?
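The “too fast” question above can be checked mechanically. A common rule of thumb caps subtitles at roughly 15–17 characters per second, though broadcasters’ guidelines vary; the sketch below uses 17 CPS as an assumed ceiling, and the subtitle data is invented for illustration.

```python
# Sketch: flag subtitles whose reading speed exceeds a chosen
# characters-per-second (CPS) limit. The 17 CPS ceiling is a common
# rule of thumb, not a universal standard.

def cps(text, start, end):
    """Characters per second, ignoring line breaks."""
    return len(text.replace("\n", "")) / (end - start)

def too_fast(subtitles, limit=17.0):
    """Return the indices of subtitles that exceed the CPS limit."""
    return [i for i, (start, end, text) in enumerate(subtitles)
            if cps(text, start, end) > limit]

subtitles = [
    (0.0, 2.0, "Where were you last night?"),             # 26 chars / 2 s = 13 CPS
    (2.2, 3.0, "I already told you everything I know."),  # 37 chars / 0.8 s ≈ 46 CPS
]
print(too_fast(subtitles))  # → [1]
```

Flagged subtitles can then be condensed, split, or (if the edit is not yet locked) given more screen time.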

Inclusive subtitling

The following elements may be discussed at this stage:

  • Should characters be identified through the use of colours, name tags, displacement or hyphens/chevrons?
  • How often should information about volume, intensity, silence, emotions and accents be indicated in the subtitles, and how should this be done?
  • What sounds should be described, how should they be described, and how often? Is it possible to liaise with the scriptwriter and sound editor so that the description used in the subtitles can be consistent with the words used in the script?
  • Is the standard use of subtitles for the title and the lyrics for known songs and the use of a description for unknown songs suitable for this particular film? In the event of a clash between dialogue and music, which one should be prioritised?
  • Subtitling blindness: does the need to include the above non-verbal information prevent the viewers from being able to watch the images?

Creative subtitling

The following elements may be discussed at this stage:

  • What font is going to be used in the subtitles? What meaning and connotations does this carry? How does it compare to the fonts used for the credits, or to any on-screen text in the film?
  • Are all subtitles the same size, or are different sizes being used to indicate volume, distance, importance, etc?
  • Are subtitles placed only at the bottom of the screen, or are other positions used? If so, what criteria are put in place to ensure key visual cues are not obscured? Are the voluntary and involuntary movements of the eye being taken into account?
  • Would the film benefit from using word-for-word subtitles, which can increase the synchronicity between the sound/music/dialogue and the subtitles? 
  • Does it make sense to have the subtitles fade in and out (instead of simply pop up) in order to indicate pauses, reflections, or any other effect?
  • Would the film’s subtitles benefit from using pictures, speech bubbles or chat-like features?
  • Is it worth experimenting with position and size to indicate depth and background dialogue?
  • Could the subtitles work in tandem with the on-screen action? For example, a visual cue prompts the sudden appearance of text.

Sign language interpreting

The following elements may be discussed at this stage:

  • Will this product be distributed with sign language interpreting?
  • Where will the interpreter be placed in the image? Will the SLI image block important visual content?
  • Will the interpreter appear in a box, or will their image be superimposed directly onto the picture?
  • Will it be a continuous or intermittent image?
  • Who will be the interpreter? Will there be one interpreter for all the characters? Will there be a casting?
  • Will standard interpreting be used or will there be room for creativity?
  • Will the performer’s clothes be standard or will they match the original style of the audiovisual piece?

Audio description

The following elements may be discussed at this stage:

  • In standard AD only a single voice is heard. Is that appropriate for this film?
  • Who will voice the AD?
  • Will the filmmakers be present during the AD recording session in order to “direct” the audio describer? Will they encourage the standard neutral/objective vocal style, or opt for something more emotional?
  • Will the filmmakers make the script/shooting script available to the describer?
  • Will the director or the writer want some involvement in the development of the AD script?
  • Will the filmmaker want the describer to include elements of the cinematography?
  • Will the sound designer create a mixed track weaving the AD into the original soundtrack, or is an automatic default fade sufficient?
  • Is an audio introduction required? If so, where will it be available? For example, can it be downloaded directly onto smartphones via the film’s website?
  • Will the AD be translated for accessible versions in other countries, or will each country produce their own?

                      ACCESSIBLE FILMMAKING WORKFLOW


The accessible filmmaking approach encourages close collaboration between the filmmakers and the director of accessibility & translation (DAT). The earlier the DAT is brought on to the project, the easier it is for them to liaise with the other professionals involved in accessibility and translation on behalf of the filmmakers.

Here is a summary of the recommended 17 steps for an accessible filmmaking workflow:

                Steps for pre-production stage

1. (MULTILINGUAL FILMS) Translation and accessibility in the scriptwriting process.
2. Translation of script for funding.
3. Provision of pre-production material to the DAT.
4. Initial meeting with the director and production of a translation/accessibility proposal.
5. Recruitment of media accessibility professionals, translators and, if need be, a sensory-impaired consultant.

                Steps for production

6. On-set or remote translation and interpreting.
7. (On-set or remote) discussions with the filmmaker about mise-en-scène and cinematography.
8. Transcription of footage for editing using respeaking (speech recognition-based subtitles).

                   Steps for post-production prior to distribution

9. Provision of film, script and further docs to either the DAT or the:
          • dubbing translator
          • subtitler
          • audio describer
10. Preparation of:
          • dubbing script
          • subtitles
          • audio description
11. (ALWAYS) Meeting between the filmmaker/creative team and the DAT or the:
          • dubbing translator
          • subtitler
          • audio describer
12. Amendments to the editing of the film.
13. (ALWAYS) Preparation (and recording) of accessible versions of:
          • dubbing script
          • subtitles
          • audio description
14. (ALWAYS) Meeting between the filmmaker/creative team and the DAT or the:
          • dubbing translator
          • subtitler
          • audio describer
15. Amendments to:
          • the dubbed track
          • the subtitles
          • the audio description
16. Feedback from the director.
17. (ALWAYS) Final versions of dubbing, subtitles and audio description, and preparation of a translation and accessibility guide for the film.

The extensive nature of this approach obviously carries budgetary and scheduling implications. In the event that such an approach is not feasible, there are still steps a production can take to ensure that it meets the minimum requirements of accessible filmmaking.

Below are the minimum requirements – once again depending on whether translation and accessibility are considered in pre-production, production or post-production/before distribution:

Pre-production

  • Provision of pre-production material to the DAT and/or the translators.
  • Initial meeting with the director and production of a translation/accessibility proposal.

Production

  • (On-set or remote) discussions with the filmmaker about mise-en-scène and cinematography.

Post-production/before distribution

  • Provision of film, script and further docs to the DAT or the dubbing translator/subtitler/audio describer.
  • Meeting between the filmmaker/creative team and the DAT or the dubbing translator/subtitler/audio describer.
  • Preparation (and recording) of accessible dubbing script/subtitles/audio description.
  • Meeting between the filmmaker/creative team and the DAT or the dubbing translator/subtitler/audio describer.
  • Final dubbing, subtitles, audio description and preparation of a translation and accessibility guide for the film.



The budgetary implications of accessible filmmaking:

Standard package

TOTAL (approx. costs): £5,000 (min)

Audio description: £2,300

      • Script: £700
      • Recording: £500
      • Studio Hire: £400
      • Sound Editor: £500
      • Meetings/Amendments: £200

SDH: £1,100

      • Origination: £650
      • Proofreading: £250
      • Meetings/Amendments: £200

Director of accessibility: £1,600

(During post-production, 8 days)

      • Recruitment
      • Coordination
      • Meetings
      • Quality control
      • AD recording
      • Subtitling guide
Additional extras

TOTAL (standard package + extras; approx. costs): £11,000 (min)

Audio introduction: £300

      • Script: £100
      • Recording: £100
      • Studio: £100

Creative/Integrated subtitles: £4,100

      • Concept: £500
      • Implementation: £3,600

Director of accessibility: £600

(During pre-production, based on 3 days)

      • Recruitment
      • Budget
      • Meeting
      • Proposal

Sensory-impaired consultancy: £500

      • SDH: £250 per day
      • AD: £250 per day

English language template: £500

To be used by interlingual subtitlers.
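As a sanity check, the totals above can be reproduced by summing the line items. The sketch below assumes one day each of SDH and AD consultancy, and treats single-figure items as flat fees.

```python
# Sketch: sanity-check the budget figures above by summing the line items.
# Single-figure items are treated as flat fees; the consultancy assumes
# one day each of SDH and AD work.

standard = {
    "Audio description": {"Script": 700, "Recording": 500, "Studio hire": 400,
                          "Sound editor": 500, "Meetings/amendments": 200},
    "SDH": {"Origination": 650, "Proofreading": 250, "Meetings/amendments": 200},
    "Director of accessibility (post-production)": {"Flat fee": 1600},
}

extras = {
    "Audio introduction": {"Script": 100, "Recording": 100, "Studio": 100},
    "Creative/integrated subtitles": {"Concept": 500, "Implementation": 3600},
    "Director of accessibility (pre-production)": {"Flat fee": 600},
    "Sensory-impaired consultancy": {"SDH (1 day)": 250, "AD (1 day)": 250},
    "English language template": {"Flat fee": 500},
}

def total(package):
    """Sum every line item in a package of costed services."""
    return sum(sum(items.values()) for items in package.values())

print(f"Standard package: £{total(standard):,}")                   # £5,000
print(f"Standard + extras: £{total(standard) + total(extras):,}")  # £11,000
```

Keeping the figures in a structure like this makes it easy to re-total the budget as items are added or quotes change.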

If you require more information on these figures for your budget, please contact:





Millions of viewers watch either translated or accessible versions of a film – yet this huge audience is normally disregarded during the filmmaking process. Any discussions regarding translation and accessibility are an afterthought.

For an average film, these viewers bring in around 50% of the total revenue, and yet only 0.01%–0.1% of the budget is usually spent on translation/accessibility. Rather than being treated as a production expenditure, the process is relegated to the distribution stage, produced in a matter of days, for very little remuneration and without any involvement from the creative team.

As a consequence, a second-rate experience is being provided to the viewers of the translated and accessible versions. Without understanding the original artistic intent of the filmmakers, numerous decisions may have been made that negatively affect the way the film is seen, understood or felt.

Filmmakers are often unaware of this problem, which explains why they leave translators and distributors to make decisions that can work against their original intent. This has a significant impact on the way films are received – yet the notion of a director allowing the standard version of their film to be screened without first approving the cut would be unheard of.

As an alternative to the industrialised model that treats translation and accessibility as an afterthought, relegating it to the distribution stage, Accessible Filmmaking integrates it as a key part of the filmmaking process (ideally from the pre-production stage). By considering the viewing experience of foreign and sensory-impaired audiences, it allows filmmakers to regain full control of how their films are received.

This process has been implemented worldwide in training, research and professional practice, and has been endorsed by both Ofcom and the ITU, the United Nations’ specialised agency for information and communication technologies.

Accessible filmmaking…

… makes financial sense – it helps reach a wider and more diverse audience.

… is more collaborative – it provides better working conditions for translators and media accessibility experts, whose remuneration can be built into the main budget of the film and who, for the first time, have the opportunity to be part of a team with which to consult and share decisions.

… is neither time-consuming nor costly, particularly if it’s built into the film’s schedule (and any discussions) from the beginning. It may also help filmmakers access accessibility funding streams and it is a requirement in some countries.

… does not force filmmakers to change their films, but instead reveals how their films change in translation and accessibility, and what options are available to them so as to ensure that their vision is maintained when it reaches foreign and sensory-impaired audiences.

… enables filmmakers to see and understand their films differently, as well as discovering exciting new ways of telling stories.

… simply makes sense! If we were architects, we would never scoff at the idea of including a disabled toilet in the initial design of a building rather than adding it at the end as an afterthought – and it should be the same with filmmaking. There will come a time when it’s commonly accepted that all films should be produced with accessibility in mind – after all, shouldn’t cinema be for everybody?

Most importantly, accessible filmmaking applies to everybody who makes films – regardless of whether they are professionals working on the latest blockbuster, or amateurs uploading homemade films to YouTube. There is a huge, underserved audience – in the millions – of people with hearing and sight loss. And in a global marketplace, the number of non-English speaking territories means we can no longer afford to side-line foreign audiences. It’s time for accessible filmmaking to become the new normal.