Mmm, hi! Just an idea to discuss.

MRLcloudServer could be used, for example, to:

  • USE MULTIPLE TTS WEB SERVICES WITH CACHING ( high availability and better latency )
  • OFFICIAL TTS SERVER
  • SHARED KNOWLEDGE
  • VERSION CHECKING ( if mrlclient or a script is outdated and broken )

What do you think about this TTS architecture:

MaVo

7 years 4 months ago

I would be annoyed by the different voices, but otherwise I think it's a good plan.

GroG

7 years 4 months ago

Ya totally moz4r :D

Not just for voice, but vision, recognition, segmentation, and all sorts of higher level tasks which benefit from being distributed, in the cloud, and possibly cached.

WOOHOO !

Started looking into Docker for this reason too...

I've never used Docker. It seems so powerful! And with Swarm the capacity can be expanded almost infinitely. And it's open source... wahoo. I really need to devour the documentation. It seems a little hard...

About TTS: if the first step is to publish MaryTTS, we could open one listener per voice+language and multithread the listeners, if the MaryTTS engine can handle that. Hmm, while writing this I realize a MyRobotLab client could act as a multithreaded MyRobotLab server; MRL is a server, after all... So we can reuse the current work on the TTS services, and many others, and redesign a little to expose the cache functions.
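A minimal sketch of the one-listener-per-voice idea, using a plain thread pool with one worker per voice+language pair. The voice names and the `listen()` placeholder are illustrative only, not the real MaryTTS API:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class VoiceListeners {

    // illustrative voice+language pairs, not actual MaryTTS voice ids
    static final List<String> VOICES = List.of("en-US:cmu-slt", "fr-FR:upmc-pierre");

    // placeholder: a real listener would block on a per-voice request
    // queue and call the TTS engine with that voice already loaded
    static String listen(String voice) {
        return "listener ready for " + voice;
    }

    public static void main(String[] args) throws InterruptedException {
        // one worker thread per voice+language pair
        ExecutorService pool = Executors.newFixedThreadPool(VOICES.size());
        for (String voice : VOICES) {
            pool.submit(() -> System.out.println(listen(voice)));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Whether this actually helps depends on the engine: it only pays off if MaryTTS can synthesize concurrently with different voices loaded.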

Excited about that. I usually don't like cloud solutions, but sometimes, like here, it will kick ass!
Disappointed I haven't worked on the Java side to help you more, but maybe Java another day.

Want me to ask Santa Claus for a 1 Gb/s server? (A small config to start, since it's expensive; later we'll surely find a generous and motivated datacenter boss interested in hosting open-source AI for free, I hope so.)


Docker is just a way to deploy an app... The reality is, we need an app to deploy to the cloud first. If we want to try this, we would need a cloud-based server (maybe one is kicking around somewhere), and we need a service to deploy onto that server that would be useful.

Then we need to integrate support for that cloud service into MRL...  

And lastly, and most importantly... (perhaps unfortunately?) It's not free to run servers in the cloud. So, we should think about some sort of subscription to help pay the bills of running the server.

Now the question is... what API endpoints should we expose from the cloud?
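To make the question concrete, here is a sketch of what one such endpoint could look like, using the JDK's built-in `com.sun.net.httpserver.HttpServer`. The `/tts` path and its `lang`/`voice`/`text` query parameters are assumptions for discussion, not an existing MRL API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CloudTtsEndpoint {

    // starts an HTTP server exposing a hypothetical /tts endpoint;
    // port 0 asks the OS for any free port
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/tts", exchange -> {
            // a real version would parse lang/voice/text and stream back
            // cached mp3 bytes; here we just echo the query string
            byte[] body = ("requested: " + exchange.getRequestURI().getQuery())
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

A client would then call something like `GET /tts?lang=en&voice=slt&text=hello`, which is also the kind of stable URL a cache like Squid or Varnish could sit in front of.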

I remember, in a past life, using Squid to cache the web. If I test it against the weak availability of naturalspeaking, do you think it's a good option?

If not, I have a cleaner idea: fetch the mp3 and simply publish it after a standardized rename (like concatenating lang+voice+text.mp3). The URL to GET would of course be standardized too.
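A sketch of that naming scheme in Java. It hashes the text instead of concatenating it raw (a small deviation from the post's literal lang+voice+text.mp3), since real sentences contain spaces and punctuation that are awkward or illegal in filenames. All names here are illustrative, not an existing MRL helper:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class TtsCacheKey {

    // standardized cache filename: lang + voice + hash(text) + ".mp3";
    // the same (lang, voice, text) triple always maps to the same file
    public static String fileName(String lang, String voice, String text) {
        return lang + "_" + voice + "_" + sha1(text) + ".mp3";
    }

    // hex-encoded SHA-1 of the UTF-8 bytes of the text
    static String sha1(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8))) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the name is deterministic, a cache (or a plain web server directory) can answer a repeat request without ever touching the upstream TTS service.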

Would it be possible to add a debug option to the naturalspeak service, to fetch an additional URL with parameters?

So we can run some tests with Squid / Varnish or other homemade solutions.