Machine Learning in Action: Naive Bayes Classifier

Example: using naive Bayes to classify documents

Build a filter that screens out insulting language from the posts on an online community's message board.

1. Preparing the data: building word vectors from text

def loadDataSet():
    postingList=[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                 ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                 ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                 ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                 ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                 ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec=[0,1,0,1,0,1] # 1 = abusive/insulting post, 0 = normal post
    return postingList,classVec


def createVocabList(dataSet):
    # Build the vocabulary: every unique word that appears in any document
    vocabSet=set([])
    for document in dataSet:
        vocabSet=vocabSet|set(document) # union of the two sets
    return list(vocabSet)


def setOfWords2Vec(vocabList,inputSet):
    # Convert a document into a 0/1 vector: slot k is 1 if vocabList[k] occurs in the document
    returnVec=[0]*len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)]=1
        else:
            print("the word: %s is not in my vocabulary!" % word)
    return returnVec
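As a quick check, the three functions can be exercised directly. The snippet below is a minimal sketch; the exact order of the vocabulary (and therefore of the 0/1 vector) depends on Python's set iteration order, so it varies between runs.

listOPosts,listClasses=loadDataSet()
myVocabList=createVocabList(listOPosts)
print(len(myVocabList))                           # 32 distinct words in this toy corpus
print(setOfWords2Vec(myVocabList,listOPosts[0]))  # 0/1 vector, one slot per vocabulary word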

2. Training the algorithm: computing probabilities from word vectors

Formula: the classifier rests on Bayes' rule,

p(ci|w) = p(w|ci) * p(ci) / p(w)

where w is the word vector of a document and ci is the class (0 or 1). The "naive" assumption is that the words are conditionally independent given the class, so p(w|ci) is computed as the product p(w0|ci)*p(w1|ci)*...*p(wN|ci). Since p(w) is the same for both classes, comparing the numerators is enough. trainNB0() estimates the prior p(ci) and the per-word probabilities p(wk|ci) from the training matrix:

import numpy as np

def trainNB0(trainMatrix,trainCategory):
    numTrainDocs=len(trainMatrix)
    numWords=len(trainMatrix[0])
    pAbusive=sum(trainCategory)/float(numTrainDocs) # prior probability p(c=1)
    p0Num=np.zeros(numWords)    # per-word counts for class 0
    p1Num=np.zeros(numWords)    # per-word counts for class 1
    p0Denom=0.0                 # total number of words seen in class 0
    p1Denom=0.0                 # total number of words seen in class 1
    for i in range(numTrainDocs):
        if trainCategory[i]==1:
            p1Num+=trainMatrix[i]
            p1Denom+=sum(trainMatrix[i])
        else:
            p0Num+=trainMatrix[i]
            p0Denom+=sum(trainMatrix[i])
    p1Vect=p1Num/p1Denom        # p(wk|c=1) for every word k
    p0Vect=p0Num/p0Denom        # p(wk|c=0) for every word k
    return p0Vect,p1Vect,pAbusive
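The training matrix is simply the set-of-words vectors of all six postings stacked together. A minimal sketch of calling the function (the variable names match the ones used in testingNB() further down):

listOPosts,listClasses=loadDataSet()
myVocabList=createVocabList(listOPosts)
trainMat=[setOfWords2Vec(myVocabList,post) for post in listOPosts]
p0V,p1V,pAb=trainNB0(np.array(trainMat),np.array(listClasses))
print(pAb)   # 0.5 -- three of the six postings are labelled abusive
print(p1V)   # p(wk|c=1); note the zeros for words that never occur in an abusive posting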

3. Testing the algorithm: modifying the classifier for real-world conditions

Two practical problems have to be fixed before these probabilities can be used. First, if any single word probability p(wk|ci) is 0, the whole product p(w|ci) becomes 0; initializing every word count to 1 and the denominators to 2 (Laplace smoothing) avoids this. Second, multiplying many very small probabilities underflows to 0 in floating point; taking the logarithm turns the product into a sum and keeps the values in a usable range. The updated trainNB0() applies both changes:

def trainNB0(trainMatrix,trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory)/float(numTrainDocs)
    p0Num = np.ones(numWords); p1Num = np.ones(numWords)   # changed to ones(): Laplace smoothing, no zero counts
    p0Denom = 2.0; p1Denom = 2.0                            # changed to 2.0: matching smoothed denominators
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = np.log(p1Num/p1Denom)          # changed to log() to prevent underflow
    p0Vect = np.log(p0Num/p0Denom)          # changed to log() to prevent underflow
    return p0Vect,p1Vect,pAbusive


def classifyNB(vec2Classify,p0Vec,p1Vec,pClass1):
    # Naive Bayes decision rule in log space: p0Vec/p1Vec already hold log probabilities,
    # so the product over words becomes an element-wise multiply-and-sum, and the prior
    # is added as log(pClass1) because log(a*b)=log(a)+log(b)
    p1=sum(vec2Classify*p1Vec)+np.log(pClass1)
    p0=sum(vec2Classify*p0Vec)+np.log(1.0-pClass1) # pClass0 = 1.0-pClass1
    if p1>p0:
        return 1
    else:
        return 0


def testingNB():
    listOPosts,listClasses=loadDataSet()
    myVocabList=createVocabList(listOPosts)
    trainMat=[]
    for postinDoc in listOPosts:
        trainMat.append(setOfWords2Vec(myVocabList,postinDoc))
    p0V,p1V,pAb=trainNB0(np.array(trainMat),np.array(listClasses))
    testEntry=['love','my','dalmation']
    thisDoc=np.array(setOfWords2Vec(myVocabList,testEntry))
    print(testEntry,'classified as:',classifyNB(thisDoc,p0V,p1V,pAb))
    testEntry=['stupid','garbage']
    thisDoc=np.array(setOfWords2Vec(myVocabList,testEntry))
    print(testEntry,'classified as:',classifyNB(thisDoc,p0V,p1V,pAb))
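Running the test ties everything together. A minimal sketch of the expected behaviour: the posting containing 'stupid' and 'garbage' should come back as class 1, the friendly one as class 0.

if __name__=='__main__':
    testingNB()
    # expected output:
    # ['love', 'my', 'dalmation'] classified as: 0
    # ['stupid', 'garbage'] classified as: 1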
