feature scaling: Chinese translation, meaning, pronunciation, usage, and example sentences
1. feature scaling
Chinese translation of feature scaling
Common rendering:
特征縮放 (feature scaling)
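Feature scaling maps each feature onto a comparable range so that no single feature dominates a model simply because of its units. A minimal Python sketch of two common approaches, min-max scaling and z-score standardization (the function names and sample data below are illustrative, not from the original entry):

```python
# Illustrative sketch of two common feature-scaling methods.
# Helper names and sample data are hypothetical, for demonstration only.

def min_max_scale(values):
    """Rescale values linearly onto the [0, 1] range."""
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:
        # A constant feature carries no information; map it to 0.
        return [0.0 for _ in values]
    return [(v - lo) / span for v in values]

def standardize(values):
    """Shift values to zero mean and unit variance (z-score scaling)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    if std == 0:
        return [0.0 for _ in values]
    return [(v - mean) / std for v in values]

# Example: heights in centimetres, rescaled to [0, 1].
heights_cm = [150.0, 160.0, 170.0, 180.0, 190.0]
print(min_max_scale(heights_cm))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each feature is scaled independently, which matches the "each feature individually associates with a scaling factor" idea in the usage examples below.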
Bilingual usage examples of feature scaling
1. One feature I would like to add that would be generally useful, but critical for smartphone support, is viewport and/or scaling support.───我想增加的一個(gè)非常有用的特性是視角與縮放支持,這對于智能手機來(lái)說(shuō)也非常關(guān)鍵。
2. In this letter, we focus on the feature scaling kernel, where each feature individually associates with a scaling factor.───本文的討論集中在特征標度核,在此類(lèi)核函數中,每個(gè)特征都對應于一個(gè)獨立的尺度因子。
3. This is the feature that gives the topology its horizontal scaling.───正是這個(gè)特性使得該拓撲可以水平伸縮。
4. This approach allows for individual feature scaling versus scaling all five features.───該方法允許單個(gè)特性的擴展和所有五個(gè)特性的擴展。
Similar words and phrases for feature scaling
1. feature───v.i. to play an important part; v.t. to feature, to star; n. a characteristic or distinctive attribute; facial features; a feature story or special program
2. fractional scaling───fractional scaling
3. scaling bias───scaling bias
4. scaling question───scaling question
5. scaling calculator───scaling calculator
6. scaling factor───scale factor; conversion coefficient
7. scaling up───scaling up; increasing in proportion
8. feature story───feature story; in-depth or special report
2. What new AI-chip research was presented at ISCA 2018, the top computer-architecture conference?
ISCA is the premier conference in computer architecture, and this year's edition recently concluded in Los Angeles. The conference received 378 submissions and accepted 64 papers. With artificial intelligence growing ever hotter, architecture venues are seeing more and more AI-related papers. Below is a tally compiled on GitHub by Dr. Fengbin Tu of the Institute of Microelectronics at Tsinghua University:
1. RANA: Towards Efficient Neural Acceleration with Refresh-Optimized Embedded DRAM. (THU)
2. A Configurable Cloud-Scale DNN Processor for Real-Time AI. (Microsoft)
3. PROMISE: An End-to-End Design of a Programmable Mixed-Signal Accelerator for Machine Learning Algorithms. (UIUC)
4. Computation Reuse in DNNs by Exploiting Input Similarity. (UPC)
5. GANAX: A Unified SIMD-MIMD Acceleration for Generative Adversarial Network. (Georgia Tech, IPM, Qualcomm, UCSD, UIUC)
6. SnaPEA: Predictive Early Activation for Reducing Computation in Deep Convolutional Neural Networks. (UCSD, Georgia Tech, Qualcomm)
7. UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition. (UIUC, NVIDIA)
8. An Energy-Efficient Neural Network Accelerator based on Outlier-Aware Low Precision Computation. (Seoul National)
9. Prediction based Execution on Deep Neural Networks. (Florida)
10. Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks. (Georgia Tech, ARM, UCSD)
11. Gist: Efficient Data Encoding for Deep Neural Network Training. (Michigan, Microsoft, Toronto)
12. The Dark Side of DNN Pruning. (UPC)
13. Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks. (Michigan)
14. EVA^2: Exploiting Temporal Redundancy in Live Computer Vision. (Cornell)
15. Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision. (Rochester, Georgia Tech, ARM)
16. Feature-Driven and Spatially Folded Digital Neurons for Efficient Spiking Neural Network Simulations. (POSTECH/Berkeley, Seoul National)
17. Space-Time Algebra: A Model for Neocortical Computation. (Wisconsin)
18. Scaling Datacenter Accelerators With Compute-Reuse Architectures. (Princeton)
19. Enabling Scientific Computing on Memristive Accelerators. (Rochester)

The first paper above is from Tsinghua University and is the only paper at this year's conference whose first-listed institution is from China. Interested readers can refer to: https://www.jiqizhixin.com/articles/2018-06-12-8