Abstract: Example 1 computes the eigenvalues and eigenvectors of two matrices with numpy.linalg.eig and prints the results. Example 2 standardizes the iris data set to zero mean and unit variance, uses PCA to project it onto its two leading principal-component axes, and draws the resulting two-dimensional data as a scatter plot with each iris class shown in its own color.
Example 1:
import numpy as np

A = np.array([[2, 3], [3, -6]])
w1, V1 = np.linalg.eig(A)  # compute the eigenvalues and eigenvectors of A
print("Eigenvalues of A =", w1)
print("Eigenvectors of A =", V1)

B = np.array([[5, 2, 0], [2, 5, 0], [-3, 4, 6]])
w2, V2 = np.linalg.eig(B)  # compute the eigenvalues and eigenvectors of B
print("\n")
print("Eigenvalues of B =", w2)
print("Eigenvectors of B =", V2)
Output:
Eigenvalues of A = [ 3. -7.]
Eigenvectors of A = [[ 0.9486833  -0.31622777]
 [ 0.31622777  0.9486833 ]]

Eigenvalues of B = [6. 7. 3.]
Eigenvectors of B = [[ 0.          0.57735027  0.36650833]
 [ 0.          0.57735027 -0.36650833]
 [ 1.          0.57735027  0.85518611]]
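As a quick check (not part of the original listing), every column of the eigenvector matrix returned by np.linalg.eig should satisfy A v = λ v. A minimal sketch, reusing A, w1 and V1 from Example 1:

# Sketch: verify A @ v == lambda * v for each eigenpair found above.
for i in range(len(w1)):
    v = V1[:, i]   # i-th eigenvector (stored as a column)
    lam = w1[i]    # corresponding eigenvalue
    print(np.allclose(A @ v, lam * v))  # prints True for each pair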
Example 2:
Data URL: https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# URL of the iris data set
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"

# Load it into a pandas DataFrame
df = pd.read_csv(url, names=["sepal length", "sepal width", "petal length", "petal width", "target"])
nrow, ncol = df.shape
print("Iris data set :", nrow, "records with", ncol, "attributes\n")
print("First 5 records in iris data\n", df.head(5))

features = ["sepal length", "sepal width", "petal length", "petal width"]
x = df.loc[:, features].values
y = df.loc[:, ["target"]].values
x = StandardScaler().fit_transform(x)  # standardize to mean 0, variance 1

pca = PCA(n_components=2)  # PCA with two components
principalComponents = pca.fit_transform(x)  # project onto the two principal axes
print("\nFirst principal axis:", pca.components_[0])
print("Second principal axis:", pca.components_[1])

principalDf = pd.DataFrame(data=principalComponents,
                           columns=["principal component 1", "principal component 2"])
finalDf = pd.concat([principalDf, df[["target"]]], axis=1)
print("\nFirst 5 Transformed records\n", finalDf.head(5))

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(1, 1, 1)
ax.set_xlabel("principal component 1", fontsize=12)
ax.set_ylabel("principal component 2", fontsize=12)
ax.set_title("PCA with 2 components", fontsize=15)
targets = ["Iris-setosa", "Iris-versicolor", "Iris-virginica"]  # iris class names
colors = ["r", "g", "b"]  # one color per class
for target, color in zip(targets, colors):
    indicesToKeep = finalDf["target"] == target
    ax.scatter(finalDf.loc[indicesToKeep, "principal component 1"],
               finalDf.loc[indicesToKeep, "principal component 2"],
               c=color, s=40)
ax.legend(targets)
ax.grid()
plt.show()
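One optional addition (not in the original listing): a fitted PCA object exposes explained_variance_ratio_, which tells how much of the total variance the two retained axes keep, so the quality of the 2-D projection can be judged at a glance:

# Optional: fraction of total variance captured by each retained component.
# Reuses the fitted pca object from the listing above.
print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Total variance kept:", pca.explained_variance_ratio_.sum())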
Output: the script prints the data-set size, the first five records, the two principal axes, and the first five transformed records, then shows a scatter plot titled "PCA with 2 components" with each iris class drawn in its own color.
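Example 3: the run results below belong to a further example on matrix rank whose listing is not shown above. A minimal sketch that would reproduce this kind of output, assuming NumPy's np.linalg.matrix_rank and reading the matrices off the output itself (the helper name show is made up here):

import numpy as np

def show(name, M):
    # Print the matrix under a "--- name ---" banner, then its rank.
    print("---", name, "---")
    for row in M:
        print(" ".join("%.2f" % v for v in row))
    print("rank(%s) = %d" % (name, np.linalg.matrix_rank(M)))

A = np.eye(4)                      # 4x4 identity matrix, full rank
B = np.zeros((3, 3))               # 3x3 zero matrix, rank 0
C = np.array([[2, 5, -3, -4, 8],
              [4, 7, -4, -3, 9],
              [6, 9, -5, 2, 4],
              [0, -9, 6, 5, -6]])  # 4x5 matrix
show("A", A)
show("B", B)
show("C", C)
show("C^T", C.T)                   # transposing never changes the rank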
Output:
--- A ---
1.00 0.00 0.00 0.00
0.00 1.00 0.00 0.00
0.00 0.00 1.00 0.00
0.00 0.00 0.00 1.00
rank(A) = 4
--- B ---
0.00 0.00 0.00
0.00 0.00 0.00
0.00 0.00 0.00
rank(B) = 0
--- C ---
2.00 5.00 -3.00 -4.00 8.00
4.00 7.00 -4.00 -3.00 9.00
6.00 9.00 -5.00 2.00 4.00
0.00 -9.00 6.00 5.00 -6.00
rank(C) = 3
--- C^T ---
2.00 4.00 6.00 0.00
5.00 7.00 9.00 -9.00
-3.00 -4.00 -5.00 6.00
-4.00 -3.00 2.00 5.00
8.00 9.00 4.00 -6.00
rank(C^T) = 3
This concludes the article.