Swype and similar gesture-typing systems are elegant input interfaces well adapted to touchscreens. Can they be improved?
Add "a", "an", and "the" as single buttons that can be swiped like prefixes onto a word, without having to lift the finger. Possibly other common words too, such as prepositions. Punctuation as suffixes.
Even without dedicated buttons, common word sequences (such as those above) could be swiped in one continuous gesture, with or without the space.
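A minimal sketch of how a decoder might admit such combined gestures. Real gesture decoders score spatial paths; here, purely for illustration, a word "matches" if its letters appear in order among the keys the finger crossed. The function-word list, vocabulary, and key trace are all hypothetical.

```python
FUNCTION_WORDS = ["a", "an", "the"]  # hypothetical prefix set

def is_subsequence(word, keys):
    # True if word's letters occur in order within the crossed-key trace
    it = iter(keys)
    return all(ch in it for ch in word)

def candidates(keys, vocab):
    out = []
    for w in vocab:
        if is_subsequence(w, keys):
            out.append(w)
        for f in FUNCTION_WORDS:
            # one continuous gesture for prefix + word; space restored on output
            if is_subsequence(f + w, keys):
                out.append(f + " " + w)
    return out
```

Given a trace like `"tghecdat"` (swiping t-h-e-c-a-t with stray keys crossed en route) and a vocabulary containing "cat", this yields both readings, `"cat"` and `"the cat"`; the language model would then pick between them.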
Rearrange the letters to avoid ambiguity between common words. This is difficult, however, because rapid gesture typing critically relies on the user having memorized the qwerty layout. Memorization is necessary because one cannot easily hunt for the next letter mid-gesture: the finger covers up a large portion of the screen where the next letter might be hiding. (This is not an issue for virtual keyboards on which keys are tapped: you can see the whole keyboard between letters.) Perhaps display a second reference copy of the keyboard above the gesture keyboard while the user learns a new layout.
Add extra copies of certain letters, carefully placed, so that the user can learn alternate gestures that avoid misparsing.
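Either idea (rearranging letters, or placing duplicates) needs a way to measure how confusable two words' gestures are on a candidate layout. A rough sketch, assuming hypothetical key-center coordinates and simple shape comparison (resample both paths by arc length, average the pointwise distance): on qwerty, "pit" and "put" trace essentially the same straight line along the top row, which is exactly the ambiguity a new layout or a duplicate key would try to break.

```python
import math

# Hypothetical key-center grid for qwerty: x = column + row stagger, y = row.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
STAGGER = [0.0, 0.25, 0.75]
KEYS = {ch: (i + off, float(r))
        for r, (row, off) in enumerate(zip(ROWS, STAGGER))
        for i, ch in enumerate(row)}

def resample(pts, n=32):
    """Resample a polyline to n points spaced evenly by arc length."""
    cum = [0.0]
    for a, b in zip(pts, pts[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]
    if total == 0:
        return [pts[0]] * n  # degenerate path (single key)
    out, j = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(cum) - 2 and cum[j + 1] < target:
            j += 1
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0 else (target - cum[j]) / seg
        (ax, ay), (bx, by) = pts[j], pts[j + 1]
        out.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return out

def ambiguity(w1, w2, layout=KEYS):
    """Mean distance between two gesture shapes; lower = more confusable."""
    p1 = resample([layout[c] for c in w1])
    p2 = resample([layout[c] for c in w2])
    return sum(math.dist(a, b) for a, b in zip(p1, p2)) / len(p1)
```

Summing this score over frequent word pairs gives an objective that a layout search, or the placement of an extra letter copy, could try to maximize.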
Shrink the keyboard to give more space to the app, relying on stronger NLP to guess the word. Possibly reduce the number of rows of letters, rearranging them.
Use all previously typed words as context, setting prior probabilities. Harder: also use later words as context. The latter is probably best presented as a variation on spell checking, underlining words likely misparsed, rather than changing words long after they have been typed. Later context is also available when editing the middle of text, though sentences are often ungrammatical while being edited.
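The preceding-word prior can be sketched as a bigram language model reranking the decoder's shape-scored candidates. The counts below are toy values for illustration; a real system would estimate them from a large corpus, and the same combined score could drive the spell-check-style flagging above by underlining any chosen word whose score falls below a threshold.

```python
from collections import defaultdict

# Toy bigram counts (hypothetical; a real system learns these from a corpus).
BIGRAMS = {("the", "cat"): 50, ("the", "car"): 40,
           ("my", "car"): 30, ("my", "cat"): 5}
UNIGRAMS = defaultdict(int)
for (w, _), c in BIGRAMS.items():
    UNIGRAMS[w] += c

def prior(prev, word, alpha=1.0, vocab_size=10_000):
    # Add-alpha smoothed P(word | prev), so unseen pairs keep nonzero mass.
    return (BIGRAMS.get((prev, word), 0) + alpha) / (UNIGRAMS[prev] + alpha * vocab_size)

def best_word(prev, shape_scored):
    """shape_scored: list of (word, gesture-shape likelihood) pairs."""
    return max(shape_scored, key=lambda ws: ws[1] * prior(prev, ws[0]))[0]
```

With equal shape scores, the same ambiguous gesture resolves differently by context: after "the" it picks "cat", after "my" it picks "car".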