Replace gnome-speech with a More Flexible Speech Backend
Replace gnome-speech with a more flexible speech backend, usable by GUI and console accessibility applications alike. gnome-speech is GNOME-specific and will be going away upstream; however, the next best alternative, speech-dispatcher, needs extra work before it is a viable replacement for Orca and Linux accessibility applications in general.
Blueprint information
- Status:
- Complete
- Approver:
- Martin Pitt
- Priority:
- Medium
- Drafter:
- Luke Yelavich
- Direction:
- Needs approval
- Assignee:
- Luke Yelavich
- Definition:
- Approved
- Series goal:
- Accepted for karmic
- Implementation:
- Implemented
- Milestone target:
- ubuntu-9.10-beta
- Started by:
- Luke Yelavich
- Completed by:
- Martin Pitt
Whiteboard
pitti, 2009-06-12:
- Is speech-dispatcher scheduled to become an official part of GNOME, so that it will get upstream support and integration? I'm particularly thinking of maintaining compatibility with ATK in general, and gnome-at-
TheMuso: speech-dispatcher has nothing to do with ATK or the GNOME accessibility framework. gnome-speech used CORBA, and either needs to be ported to something else, or replaced. Speech-dispatcher would become an external dependency of GNOME, and I plan to work with the gnome-orca/
This work is also very much needed, since nobody else in the accessibility community has the time, or is willing, to do the necessary work to get a suitable replacement for speech services in place by the time GNOME 3.0 rolls around.
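For context, this is roughly what driving speech-dispatcher from a client looks like. A minimal sketch using the speechd Python bindings shipped with speech-dispatcher; it assumes the bindings are installed and a speech-dispatcher daemon is reachable.

```python
# Minimal speech-dispatcher client via the speechd Python bindings.
import speechd

client = speechd.SSIPClient('demo')   # client name reported to the daemon
client.set_language('en')             # ISO language code for synthesis
client.speak('Hello from speech-dispatcher')
client.close()                        # close the SSIP connection
```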
- I'm not entirely sure that it is a good idea to translate log files. Especially not if their purpose is debugging, since you probably can't send them to bug reports that way. Do they contain information that the user actually needs to see? If so, these bits should probably go in a separate "user log" which is translatable?
TheMuso: Fair enough, I've removed that from the spec, as the logs are only really for debugging purposes at this point.
- Why is it principally easier to implement an entirely new Unix socket backend than to use different TCP/IP ports? Do you want to store the Unix sockets in the user's home directory?
TheMuso: I've decided to work out a way to ensure each user session on a system has its own port. See the spec. In terms of time spent on code, the IPC is the least of our worries at this point.
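To illustrate the idea only (the actual scheme is described in the spec), here is a hypothetical sketch of deriving a per-user port: offset speech-dispatcher's default SSIP port (6560) by the UID, honouring the SPEECHD_PORT environment variable when set. The offset calculation is an assumption for illustration, not the real implementation.

```python
# Hypothetical per-user port derivation; the real scheme lives in the spec.
import os

SSIP_DEFAULT_PORT = 6560  # speech-dispatcher's default TCP port

def session_port():
    """Return a port unique to this user's session.

    Respects SPEECHD_PORT if set in the environment; otherwise offsets
    the default port by the UID (illustrative assumption only).
    """
    env = os.environ.get('SPEECHD_PORT')
    if env:
        return int(env)
    return SSIP_DEFAULT_PORT + os.getuid()
```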
- This involves quite a large set of code changes; do you think you'll be able to fit these into your schedule, together with the audio maintenance? Or are there more people working on these improvements?
TheMuso: Yes, I will be getting help from the community with this work, mostly relating to language/
pitti, 2009-06-16: Thanks, approved
Work items:
Adjust speech-dispatcher logging mechanism to not produce any logs by default: DONE
Add command-line flags to enable logs to be produced for debugging: DONE
Add command-line flags to force the hostname/port to listen on: DONE
Extend the C and python API to use a unique port for individual user sessions: DONE
Implement graceful audio fallback: DONE
Implement multi-level punctuation handling, depending on the client, chosen synthesizer, and language (see the client API sketch after this list): POSTPONED
Implement multi-level capital letter handling, depending on the client, chosen synthesizer, and language: POSTPONED
Implement multi-level word pronunciation handling, depending on the client, chosen synthesizer, and language: POSTPONED
Implement indexing of text strings by word, in addition to sentence/utterance: POSTPONED
Implement speech-dispatcher auto-spawning by client API/bindings: DONE
Port Dasher to speech-dispatcher: POSTPONED
Port Gok to speech-dispatcher: POSTPONED
Promote speech-dispatcher and libdotconf to main: DONE
Debug speech-dispatcher's pulseaudio output support: TODO
Implement mechanism to allow for the system speech-dispatcher daemon to communicate with a user session speech-dispatcher daemon: POSTPONED
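The sketch referenced above shows the client-side settings that the punctuation, capital-letter, and auto-spawning items touch, again via the speechd Python bindings. The autospawn keyword argument is an assumption about the SSIPClient constructor; older versions of the bindings may not accept it.

```python
# Client-side settings sketch; autospawn support is assumed, not confirmed.
import speechd

# Ask the bindings to start a user daemon if none is running (assumed API).
client = speechd.SSIPClient('demo', autospawn=True)

# Punctuation verbosity; PunctuationMode also defines SOME and NONE.
client.set_punctuation(speechd.PunctuationMode.ALL)

# Capital-letter handling: 'none', 'spell' or 'icon'.
client.set_cap_let_recogn('spell')

client.speak('Hello, WORLD!')
client.close()
```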