Yep, the Tower of Babel ..
We live in the age of Information and Information Exchange ... humans are spongy and adaptable, and we evolve communication all the time ..

And if everyone is using different interfaces ... how can we understand each other?

Let's make a plan :D

When MRL starts we want it to speak, listen to, and understand "our" language .. whatever that may be .. French, English, Swedish, Klingon ...

So I started with an MRL instance defaulting its single Locale to whatever language the host computer is currently using.

You can see this locale in Python with the following lines of code:

print(runtime.getLocale())
print(runtime.getLanguage())
print(runtime.getDisplayLanguage())

This returns:

en_US
en
English
Yup, I speak "Yank" :D
 
I'm probably more eloquent in code than in English, but if you want to use my interface you'll need "en".
 
Moz4r suggested one Locale to rule them all.  I believe he was interested in a single point where the Locale could be retrieved or changed, and all services would follow (if they could).

The runtime's locale is initially copied from the host computer's locale, but it can be set with the following code:

runtime.setLocale("fr")
 
then

print(runtime.getLocale())
print(runtime.getLanguage())
print(runtime.getDisplayLanguage())

gives me

fr
fr
français
Other aspects of MRL and its GUI can potentially change along with it, if they start using this method as their source for determining which language they should be using.
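For example, a service or GUI that wants to follow along could ask the runtime rather than the host OS. A minimal sketch in Java, assuming Runtime.getInstance() and the same getLanguage() accessor the Python snippet above calls:

// hypothetical: choose display strings from the runtime's locale,
// not from the host computer's locale
Runtime runtime = Runtime.getInstance(); // org.myrobotlab.service.Runtime, not java.lang.Runtime
String lang = runtime.getLanguage(); // "fr" after runtime.setLocale("fr")
String greeting = "fr".equals(lang) ? "bonjour" : "hello";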
 
I'm in the process of trying to make all SpeechSynthesis services behave in the same way.  They all derive from AbstractSpeechSynthesis, and this reduces the code (and the bugs) of the system.

If behavior is consistent across all Speech services, then it should be implemented only once, in AbstractSpeechSynthesis.
 
Here are some rules of Speech Synthesis:
  • All SpeechSynthesis services have a "default" voice
  • If the default voice is not currently set, the service will ask the Runtime for the locale language and attempt to pick the first appropriate voice for that language, if one is defined (many "ifs" here) .. If no match is found, the first voice of the first language becomes the default ... (as determined by the implementer of the service) - see the sketch after this list
  • getVoices should only be implemented by AbstractSpeechSynthesis 
  • All SpeechSynthesis services will "add" voices with the addVoice method - AbstractSpeechSynthesis will create the data structures to maintain the voices.  This makes UI development easy, because the data structures that represent voices will be the same for all SpeechSynthesis services!
  • (more to come) :)
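
To make those rules concrete, here is a minimal sketch of what the abstract class could look like .. the Voice POJO and method signatures are illustrative, not the actual MRL code:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public abstract class AbstractSpeechSynthesis {

  // one normalized voice POJO shared by every speech service -
  // one data structure means one UI and one serialization format
  public static class Voice {
    public String name;
    public String language; // e.g. "en", "fr"

    public Voice(String name, String language) {
      this.name = name;
      this.language = language;
    }
  }

  // LinkedHashMap keeps registration order, so "first voice" is deterministic
  protected Map<String, Voice> voices = new LinkedHashMap<>();
  protected Voice defaultVoice;

  // subclasses register each voice their engine supports
  protected void addVoice(String name, String language) {
    voices.put(name, new Voice(name, language));
  }

  // each engine fills the voice map by calling addVoice
  protected abstract void loadVoices();

  // the engine-specific part of speaking
  public abstract void speak(String text);

  // implemented once, here - never in the subclasses
  public List<Voice> getVoices() {
    if (voices.isEmpty()) {
      loadVoices();
    }
    return new ArrayList<>(voices.values());
  }

  // default voice rule: first voice matching the runtime's language,
  // otherwise the first voice registered at all
  public Voice getDefaultVoice(String runtimeLanguage) {
    if (defaultVoice == null) {
      for (Voice v : getVoices()) {
        if (v.language.equals(runtimeLanguage)) {
          defaultVoice = v;
          break;
        }
      }
      if (defaultVoice == null && !getVoices().isEmpty()) {
        defaultVoice = getVoices().get(0);
      }
    }
    return defaultVoice;
  }
}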

 

kwatters

6 years 5 months ago

getVoices should be abstract and left up to the subclass to implement.

getVoices should only return the voices for that specific subclass ...  

no?

 

Sorry, I didn't read the rest of your post .. so you're proposing adding addVoice to the abstract speech synth service?  ok .. but this means it's up to the subclass to register all of its voices, perhaps in the constructor?

Also, I don't think addVoice should be in the interface .. it could be in the abstract speech synth class .. but I don't think addVoice is something generic to a Speech Synthesizer ..

 

This all seems very unnatural to me.  It really seems like getVoices should be abstract, and each subclass should implement it to return the list of voices supported by that class.

 

Edit 3 ... ultimately worky is king ... in reality, not all implementations of SpeechSynthesis necessarily need to subclass AbstractSpeechSynthesis .. and with this addVoice method, how do you ensure that the subclass actually calls it appropriately? .. again .. worky is king .. so this is just semantics of how we make it worky.

GroG

6 years 5 months ago

In reply to kwatters

Ahoy!

so you're proposing adding addVoice to the abstract speech synth service?  ok .. but this means it's up to the subclass to register all of its voices, perhaps in the constructor?

Also, I don't think addVoice should be in the interface .. it could be in the abstract speech synth class .. but I don't think addVoice is something generic to a Speech Synthesizer ..

addVoice is a protected method of AbstractSpeechSynthesis .. it's not part of the SpeechSynthesis interface

getVoices is implemented in AbstractSpeechSynthesis; it returns a normalized list of Voice POJOs

AbstractSpeechSynthesis.getVoices calls an abstract loadVoices() on the subclass

The subclass uses addVoice within loadVoices ..

I'm trying to make this easy on the implementer AND have some common normalized data structures we can use for the UI and for serialization ...

If I did not do it this way there would be 8 different data structures with different publishing points, different UIs, and different serialization for each service around this data :P

I'll do the current 9 Speech Services - and the goal will be a minimal amount of code to implement each .. it's basically an implementation of loadVoices using addVoice .. and speak.
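
Under that plan a new wrapper stays tiny. A hypothetical subclass (the service name and voices here are made up for illustration, following the sketch earlier in the post) would look something like:

public class ExampleSpeechService extends AbstractSpeechSynthesis {

  @Override
  protected void loadVoices() {
    // register whatever the underlying engine reports
    addVoice("Alice", "en");
    addVoice("Chloe", "fr");
  }

  @Override
  public void speak(String text) {
    // hand the text and the selected voice to the engine -
    // the engine-specific call is the only real work here
  }
}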

That's the current plan ... until a better path is revealed ..