Using Cosine Similarity to Compute the Similarity of Two Articles
How the method works (a detailed, easy-to-follow write-up):
http://blog.csdn.net/dearwind153/article/details/52316151
Python implementation (code):
http://outofmemory.cn/code-snippet/35172/match-text-release
(Download and installation of the jieba word segmenter: http://www.cnblogs.com/kaituorensheng/p/3595879.html)
Java implementation (code + method description):
https://my.oschina.net/leejun2005/blog/116291
(The above are the references I consulted.)
-----------------------------------------------------------------------------------------------------------------------------------------------
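The idea, in brief: treat each article as a vector of word counts over the combined vocabulary; the similarity is then the cosine of the angle between the two vectors, dot(A, B) / (|A| * |B|), which is 1.0 when the word distributions are identical and 0.0 when the articles share no words. A minimal sketch of just that computation, on two made-up word-count dictionaries (the toy data is mine, not from the referenced posts):

# A minimal sketch of the cosine computation on two toy word-count
# vectors (hypothetical example data, not from the original post).
from math import sqrt

counts_a = {'apple': 2, 'banana': 1}             # word counts of article A
counts_b = {'apple': 1, 'banana': 1, 'pear': 3}  # word counts of article B

words = set(counts_a) | set(counts_b)            # union vocabulary
dot = sum(counts_a.get(w, 0) * counts_b.get(w, 0) for w in words)
norm_a = sqrt(sum(c ** 2 for c in counts_a.values()))
norm_b = sqrt(sum(c ** 2 for c in counts_b.values()))

print(dot / (norm_a * norm_b))                   # cosine similarity in [0, 1]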
I used the Python implementation, which requires the jieba word-segmentation package to be installed.
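For reference, this is what jieba's full-mode segmentation (cut_all=True, the mode used in the code below) produces; the sample sentence and expected output are taken from jieba's own README:

import jieba

# Full mode lists every dictionary word it can find, overlaps included.
print('/ '.join(jieba.cut('我来到北京清华大学', cut_all=True)))
# -> 我/ 来到/ 北京/ 清华/ 清华大学/ 华大/ 大学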
The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
from math import sqrt

# You have to install the jieba package first (pip install jieba).
import jieba

def file_reader(filename, filename2):
    # Maps each word to a pair [count_in_file1, count_in_file2].
    file_words = {}
    # Common Chinese function words to skip (a small stopword list).
    ignore_list = ['的', '了', '和', '呢', '啊', '哦', '恩', '嗯', '吧']
    # Only keep tokens that start with CJK characters.
    accepted_chars = re.compile(r'[\u4E00-\u9FA5]+')

    # Count words in the first file (assumes UTF-8 text files).
    with open(filename, encoding='utf-8') as file_object:
        all_the_text = file_object.read()
        # Full-mode segmentation: emit every dictionary word found.
        seg_list = jieba.cut(all_the_text, cut_all=True)
        for s in seg_list:
            if accepted_chars.match(s) and s not in ignore_list:
                if s not in file_words:
                    file_words[s] = [1, 0]
                else:
                    file_words[s][0] += 1

    # Count words in the second file into the second slot of each pair.
    with open(filename2, encoding='utf-8') as file_object2:
        all_the_text = file_object2.read()
        seg_list = jieba.cut(all_the_text, cut_all=True)
        for s in seg_list:
            if accepted_chars.match(s) and s not in ignore_list:
                if s not in file_words:
                    file_words[s] = [0, 1]
                else:
                    file_words[s][1] += 1

    # Cosine similarity: dot product divided by the product of the norms.
    sum_2 = 0       # dot product of the two count vectors
    sum_file1 = 0   # squared norm of file 1's vector
    sum_file2 = 0   # squared norm of file 2's vector
    for word in file_words.values():
        sum_2 += word[0] * word[1]
        sum_file1 += word[0] ** 2
        sum_file2 += word[1] ** 2

    rate = sum_2 / sqrt(sum_file1 * sum_file2)
    print('rate:', rate)

file_reader('thefile.txt', 'thefile2.txt')
# This snippet comes from http://outofmemory.cn
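To try it out, you can generate the two input files yourself. A quick usage sketch, assuming the file_reader function above is already defined; the file names are the ones hard-coded in the call above, and the sample sentences are made up:

# Write two short, similar sample articles (hypothetical content).
with open('thefile.txt', 'w', encoding='utf-8') as f:
    f.write('今天天气很好,我们一起去公园散步。')
with open('thefile2.txt', 'w', encoding='utf-8') as f:
    f.write('今天天气不错,我们一起去公园跑步。')

file_reader('thefile.txt', 'thefile2.txt')  # prints a rate close to 1.0

Note that full mode (cut_all=True) deliberately over-generates overlapping tokens; if you want cleaner counts, you could drop the cut_all argument to use jieba's default precise mode.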