The first part of this book covers the traditional network interfaces: NetBIOS, the redirector, and the various kinds of network communication carried out through the redirector. Although most of the book revolves around Winsock programming, these older APIs still have certain unique strengths compared with Winsock.
Tags: redirector, network interface, programming
Upload time: 2015-07-08
Uploader: 戀天使569
Let the following relational tables be given: R = (A, B, C) and S = (D, E, F) where A, B, C, D, E, and F are the attributes (columns). Write the SQL statements that will express each of the queries given below:
Tags: relational, following, tables, given
Upload time: 2014-01-14
Uploader: cx111111
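The queries themselves are not included in the upload, so the following is a hypothetical example (using Python's sqlite3) of the kind of SQL statement the exercise asks for over the schemas R(A, B, C) and S(D, E, F); the sample data and the join condition A = D are assumptions for illustration only.

```python
import sqlite3

# Create the two tables from the exercise, R(A, B, C) and S(D, E, F),
# in an in-memory database and load a few made-up rows.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE R (A INTEGER, B INTEGER, C INTEGER)")
cur.execute("CREATE TABLE S (D INTEGER, E INTEGER, F INTEGER)")
cur.executemany("INSERT INTO R VALUES (?, ?, ?)", [(1, 10, 100), (2, 20, 200)])
cur.executemany("INSERT INTO S VALUES (?, ?, ?)", [(1, 11, 111), (3, 33, 333)])

# Example query: join R and S on A = D and project one column from each.
rows = cur.execute("SELECT R.B, S.E FROM R JOIN S ON R.A = S.D").fetchall()
print(rows)  # [(10, 11)]
```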
Solve Ax = B with Crout's method
Upload time: 2017-09-11
Uploader: 許小華
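The language of the upload is not stated; a minimal Python sketch of Crout's method (L carries the pivots, U has a unit diagonal, no pivoting) might look like this:

```python
def crout_solve(A, b):
    """Solve A x = b via Crout LU decomposition: A = L U where L is lower
    triangular and U is upper triangular with 1s on the diagonal.
    No pivoting, so this assumes the leading pivots stay nonzero."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):           # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):       # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Back substitution: U x = y (unit diagonal, so no division needed)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

print(crout_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # ~[0.8, 1.4]
```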
B2B e-commerce system, developed in C#, with a B/S (browser/server) architecture.
Tags: B2B, e-commerce system
Upload time: 2014-01-20
Uploader: hanli8870
A warehouse simulation system for a student assignment, supporting stock-in, stock-out, query, and related functions; it can be extended. Tested under JDK 1.4.2.
Upload time: 2015-02-22
Uploader: ommshaggar
Source code for the k-means algorithm, very useful for clustering; it can be used directly.
Upload time: 2016-04-21
Uploader: ZJX5201314
Sample B-tree rules:
(1) Each node holds between MIN and 2*MIN elements, except the root node, which holds between 1 and 2*MIN.
(2) Elements within a node are sorted in ascending order and do not repeat.
(3) Each node has one more child pointer than it has elements.
(4) Every element in the subtree reached through the i-th pointer is smaller than the i-th element of the parent node.
(5) All leaf nodes of the B-tree are at the same depth.
Upload time: 2017-05-14
Uploader: 日光微瀾
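A search routine following rules (3) and (4) above can be sketched as follows; the Node layout is an assumption for illustration, not the uploaded code:

```python
class Node:
    """Hypothetical B-tree node: `keys` sorted ascending (rule 2),
    `children` has len(keys) + 1 entries for internal nodes (rule 3)."""
    def __init__(self, keys, children=None):
        self.keys = keys
        self.children = children or []  # empty list for leaf nodes

def search(node, key):
    # Find the first key >= the target within this node.
    i = 0
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:               # reached a leaf without finding it
        return False
    return search(node.children[i], key)  # descend via pointer i (rule 4)

# A tiny tree with MIN = 1: root [20], leaves [5, 10] and [30].
root = Node([20], [Node([5, 10]), Node([30])])
print(search(root, 10), search(root, 25))  # True False
```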
Euclid's algorithm (repeated remainder). Principle: gcd(a, b) = gcd(b, a mod b); when b reaches 0, the GCD of the two numbers is a. Note: getchar() will consume the newline character left behind by the previous scanf.
Upload time: 2014-01-10
Uploader: 2467478207
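The getchar()/scanf note suggests the upload is in C; the recurrence itself can be sketched in a few lines of Python (a hypothetical illustration, not the uploaded source):

```python
def gcd(a, b):
    # gcd(a, b) = gcd(b, a mod b); when b reaches 0, the answer is a.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```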
Data structures course project: a B+ tree library.
Tags: Library, tree, data structure, tree
Upload time: 2013-12-31
Uploader: semi1981
How the k-means clustering algorithm works:
Step 1. Begin by deciding the value of k, the number of clusters.
Step 2. Choose any initial partition that classifies the data into k clusters. You may assign the training samples randomly, or systematically as follows: take the first k training samples as single-element clusters, then assign each of the remaining (N - k) training samples to the cluster with the nearest centroid, recomputing the centroid of the gaining cluster after each assignment.
Step 3. Take each sample in sequence and compute its distance from the centroid of each cluster. If a sample is not currently in the cluster with the closest centroid, switch it to that cluster and update the centroids of both the cluster gaining the sample and the cluster losing it.
Step 4. Repeat step 3 until convergence is achieved, that is, until a full pass through the training samples causes no new assignments.
Tags: decision, clusters, Cluster
Upload time: 2013-12-21
Uploader: gxmm
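The steps above can be sketched as follows. This is a simplified batch variant: step 3's one-sample-at-a-time centroid update is replaced by a full reassignment pass followed by recomputing all centroids, which reaches the same fixed point on well-separated data. The initialization draws k random samples rather than taking the first k, which is one of the options step 2 allows.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Batch k-means sketch: assign every point to its nearest centroid,
    recompute centroids, and stop when a pass changes nothing (step 4)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]  # steps 1-2
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # step 3: assign each sample to nearest centroid
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Recompute each centroid as the mean of its cluster; keep the old
        # centroid if a cluster ends up empty.
        new = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:  # step 4: convergence, no assignment changed
            break
        centroids = new
    return centroids

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(sorted(kmeans(pts, 2)))  # [[0.0, 0.5], [10.0, 10.5]]
```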