Twitter is testing a prompt to revise replies with "harmful" language.
The social networking platform hasn't introduced an edit button yet, but a potential new feature - currently being tested on iOS - would give users the opportunity to reconsider replies that could be considered offensive.
The Twitter Support account wrote: "When things get heated, you may say things you don't mean.
"To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful."
With the new feature, when users hit "send" on a reply, they will be told if their wording is similar to that in posts that have previously been reported.
They will then be given the option to revise it or not.
The move is part of Twitter's drive to tackle abusive and hateful content on the platform.
Sunita Saligram - the company's global head of site policy for trust and safety - told Reuters: "We're trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret."