C++ API for text-to-speech and speech-to-text

Problem description:

I want to know whether there is a good C++ API for speech recognition and text-to-speech. I have already tried Festival, which sounds so realistic that you can hardly tell it is a computer speaking, and I have also looked at voce.

Unfortunately, Festival does not seem to support speech recognition (I mean speech-to-text), and voce is built in Java, which makes it a mess to use from C++ because of JNI.

The API should support both text-to-speech and speech-to-text, and it should come with a good set of examples, at least beyond the owner's website. It would be perfect if it also had a facility for recognizing a given set of voices, but that is optional, so no worries there.

What I plan to do with the API is to turn a robot device left and right on voice command, and also to have it say things such as "good morning" and "good night" to me. The words will be hard-coded in the program.

Please help me find a good C++ speech API for this purpose. If you have access to tutorials or installation guides, please share them as well.


Microsoft's API is at http://msdn.microsoft.com/en-us/library/ms720151(v=vs.85).aspx – 2013-04-30 09:31:35

If you are developing on Windows, you can use the Microsoft Speech API (SAPI), which lets you do both speech recognition (ASR) and text-to-speech (TTS).
You can find some samples on this page, as well as a very basic speech recognition example in this post.

If your robot has an Internet connection and you are willing to pay for the service, you could in theory use Twilio. They have libraries and examples for many different languages and platforms: http://www.twilio.com/docs/libraries

Also check out this blog post explaining how to build and control a robot using Twilio: http://www.twilio.com/blog/2012/06/build-a-phone-controlled-robot-using-node-js-arduino-rn-xv-wifly-arduinoand-twilio.html

I found that for my Arduino-based robot I first had to make an audio recording (I used Qt Multimedia for this). Read more here

I could then upload the recording to Google, which sends me back some JSON,
and I wrote some C++/Qt that turns this into a QML plugin. Here is the (alpha) code. Note: be sure to replace <YOUR FLAC FILE.flac> with your real FLAC file.

speechrecognition.cpp

#include <QNetworkReply> 
#include <QNetworkRequest> 
#include <QSslSocket> 
#include <QUrl> 
#include <QJsonDocument> 
#include <QJsonArray> 
#include <QJsonObject> 
#include "speechrecognition.h" 
#include <QFile> 
#include <QDebug> 
const char* SpeechRecognition::kContentType = "audio/x-flac; rate=8000";
// Note: this legacy v1 endpoint has since been shut down by Google.
const char* SpeechRecognition::kUrl = "http://www.google.com/speech-api/v1/recognize?xjerr=1&client=directions&lang=en";

SpeechRecognition::SpeechRecognition(QObject* parent) 
    : QObject(parent) 
{ 
    network_ = new QNetworkAccessManager(this); 
    connect(network_, SIGNAL(finished(QNetworkReply*)), 
      this, SLOT(replyFinished(QNetworkReply*))); 
} 

void SpeechRecognition::start(){
    const QUrl url(kUrl);
    QNetworkRequest req(url);
    req.setHeader(QNetworkRequest::ContentTypeHeader, kContentType);
    req.setAttribute(QNetworkRequest::DoNotBufferUploadDataAttribute, false);
    req.setAttribute(QNetworkRequest::CacheLoadControlAttribute,
        QNetworkRequest::AlwaysNetwork);
    QFile* compressedFile = new QFile("<YOUR FLAC FILE.flac>");
    if (!compressedFile->open(QIODevice::ReadOnly)) {
        qDebug() << "Could not open" << compressedFile->fileName();
        delete compressedFile;
        return;
    }
    reply_ = network_->post(req, compressedFile);
    // The file must stay open until the upload finishes; parenting it to the
    // reply means deleteLater() in replyFinished() cleans it up too.
    compressedFile->setParent(reply_);
}

void SpeechRecognition::replyFinished(QNetworkReply* reply) {
    Result result = Result_ErrorNetwork;
    Hypotheses hypotheses;

    if (reply->error() != QNetworkReply::NoError) {
        qDebug() << "ERROR\n" << reply->errorString();
    } else {
        qDebug() << "Running ParseResponse for\n" << reply << result;
        ParseResponse(reply, &result, &hypotheses);
    }
    emit Finished(result, hypotheses);
    reply_->deleteLater();
    reply_ = NULL;
}

void SpeechRecognition::ParseResponse(QIODevice* reply, Result* result,
                                      Hypotheses* hypotheses)
{
    const QString response = reply->readAll();
    qDebug() << "The reply" << response;
    QJsonDocument jsonDoc = QJsonDocument::fromJson(response.toUtf8());
    QVariantMap data = jsonDoc.toVariant().toMap();

    const int status = data.value("status", Result_ErrorNetwork).toInt();
    *result = static_cast<Result>(status);

    if (status != Result_Success)
        return;

    QVariantList list = data.value("hypotheses", QVariantList()).toList();
    foreach (const QVariant& variant, list) {
        QVariantMap map = variant.toMap();

        if (!map.contains("utterance") || !map.contains("confidence"))
            continue;

        Hypothesis hypothesis;
        hypothesis.utterance = map.value("utterance", QString()).toString();
        hypothesis.confidence = map.value("confidence", 0.0).toReal();
        *hypotheses << hypothesis;
        qDebug() << "confidence =" << hypothesis.confidence
                 << "\n Your results =" << hypothesis.utterance;
        setResults(hypothesis.utterance);
    }
}

void SpeechRecognition::setResults(const QString &results)
{
    if (m_results == results)
        return;
    m_results = results;
    emit resultsChanged();
}

QString SpeechRecognition::results() const
{
    return m_results;
}

speechrecognition.h

#ifndef SPEECHRECOGNITION_H 
#define SPEECHRECOGNITION_H 

#include <QObject> 
#include <QList> 

class QIODevice; 
class QNetworkAccessManager; 
class QNetworkReply; 
class SpeechRecognition : public QObject { 
    Q_OBJECT 
    Q_PROPERTY(QString results READ results NOTIFY resultsChanged) 

public: 
    SpeechRecognition(QObject* parent = 0); 
    static const char* kUrl; 
    static const char* kContentType; 

    struct Hypothesis {
        QString utterance;
        qreal confidence;
    };
    typedef QList<Hypothesis> Hypotheses;

    // This enumeration follows the values described here: 
    // http://www.w3.org/2005/Incubator/htmlspeech/2010/10/google-api-draft.html#speech-input-error 
    enum Result {
        Result_Success = 0,
        Result_ErrorAborted,
        Result_ErrorAudio,
        Result_ErrorNetwork,
        Result_NoSpeech,
        Result_NoMatch,
        Result_BadGrammar
    };
    Q_INVOKABLE void start(); 
    void Cancel(); 
    QString results() const;
    void setResults(const QString &results); 

signals: 
    void Finished(Result result, const Hypotheses& hypotheses); 
    void resultsChanged(); 

private slots: 
    void replyFinished(QNetworkReply* reply); 

private: 
    void ParseResponse(QIODevice* reply, Result* result, Hypotheses* hypotheses); 

private: 
    QNetworkAccessManager* network_; 
    QNetworkReply* reply_; 
    QByteArray buffered_raw_data_; 
    int num_samples_recorded_; 
    QString m_results; 
}; 

#endif // SPEECHRECOGNITION_H