LG - machine learning  CV - computer vision  CL - computation and language  AS - audio and speech  RO - robotics

1、[LG] *Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints

M Finzi, K A Wang, A G Wilson

[New York University & Cornell University]

Simplifying Hamiltonian and Lagrangian neural networks via explicit constraints. Introduces a series of challenging chaotic and extended-body systems, including systems with N-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, to push the limits of current approaches, and shows that Cartesian coordinates combined with explicit constraints make the Hamiltonians and Lagrangians of physical systems much easier to learn, improving data efficiency and trajectory-prediction accuracy by up to 100x.

Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics. Recent works improve generalization for predicting trajectories by learning the Hamiltonian or Lagrangian of a system rather than the differential equations directly. While these methods encode the constraints of the systems using generalized coordinates, we show that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multipliers dramatically simplifies the learning problem. We introduce a series of challenging chaotic and extended-body systems, including systems with N-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, to push the limits of current approaches. Our experiments show that Cartesian coordinates with explicit constraints lead to a 100x improvement in accuracy and data efficiency.
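The explicit-constraint idea can be made concrete on the simplest case, a single pendulum in Cartesian coordinates, where the rod constraint phi(x) = x·x − l² = 0 is enforced by solving for a Lagrange multiplier at each step. The sketch below is my own plain-numerics illustration of that classical setup, not the authors' code (which learns the Hamiltonian or Lagrangian from data):

```python
import numpy as np

def constrained_accel(x, v, m=1.0, g=9.81):
    """Acceleration of a pendulum in Cartesian coordinates, with the rod
    constraint phi(x) = x.x - l^2 = 0 enforced by a Lagrange multiplier.
    Requiring d^2(phi)/dt^2 = 2*(v.v + x.a) = 0 with m*a = F + 2*lam*x
    gives lam = -(m*v.v + x.F) / (2*x.x)."""
    F = np.array([0.0, -m * g])                       # gravity
    lam = -(m * np.dot(v, v) + np.dot(x, F)) / (2.0 * np.dot(x, x))
    return (F + 2.0 * lam * x) / m

def simulate(x0, v0, dt=1e-3, steps=2000):
    """Semi-implicit Euler integration of the constrained dynamics."""
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(steps):
        v += dt * constrained_accel(x, v)
        x += dt * v
    return x, v

# Rod length 1, released at rest from the horizontal position.
x, v = simulate([1.0, 0.0], [0.0, 0.0])
```

No angular coordinate ever appears: the state stays in the ambient Cartesian space and the constraint force has a closed form, which is what makes the learning target simpler in the paper's formulation.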

https://weibo.com/1402400261/JrdwR10nj

2、[LG] *Hyperparameter Ensembles for Robustness and Uncertainty Quantification

F Wenzel, J Snoek, D Tran, R Jenatton

[Google Research]

Hyperparameter ensembles for robustness and uncertainty quantification. Proposes hyper-deep ensembles, which run a random search over different hyperparameters, stratified across multiple random initializations, and achieve strong performance by ensembling over both model weights and hyperparameters. Building on batch ensembles and self-tuning networks, further proposes a parameter-efficient version, hyper-batch ensembles, whose computational and memory costs are notably lower than those of typical deep ensembles.

Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.
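As a toy picture of the hyper-deep-ensemble recipe (random search over hyperparameters, stratified across random initializations, predictions averaged), here is a hypothetical sketch on logistic regression; the paper itself works with deep networks and selects members on validation data, and every name and dataset below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(float)

def train_logreg(X, y, l2, seed, steps=300, lr=0.1):
    """Gradient-descent logistic regression; `l2` is the hyperparameter
    being searched over, `seed` fixes the random initialization."""
    w = np.random.default_rng(seed).normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

# Random search over the hyperparameter, stratified across 3 seeds each.
l2_grid = 10.0 ** rng.uniform(-4, 0, size=4)
members = [train_logreg(X, y, l2, seed)
           for l2 in l2_grid for seed in (1, 2, 3)]

# Ensemble prediction: average the members' predicted probabilities.
probs = np.mean([1.0 / (1.0 + np.exp(-(X @ w))) for w in members], axis=0)
acc = float(np.mean((probs > 0.5) == y))
```

The stratification is the point: each hyperparameter setting contributes several members that differ only in their initialization, so the ensemble mixes both sources of diversity.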

https://weibo.com/1402400261/JrdD3vjJd

3、[CV] Exemplary Natural Images Explain CNN Activations Better than Feature Visualizations

J Borowski, R S. Zimmermann, J Schepers, R Geirhos, T S. A. Wallis, M Bethge, W Brendel

[University of Tübingen]

Exemplary natural images explain CNN activations better than feature visualizations. Measures how much extremely activating images help people predict CNN activations, using a well-controlled psychophysical paradigm to compare the informativeness of synthetic images against a simple baseline visualization: exemplary natural images that also strongly activate a specific feature map. Finds that the popular synthetic images from feature visualizations provide considerably less information for assessing CNN activations than natural images do.

Feature visualizations such as synthetic maximally activating images are a widely used explanation method to better understand the information processing of convolutional neural networks (CNNs). At the same time, there are concerns that these visualizations might not accurately represent CNNs' inner workings. Here, we measure how much extremely activating images help humans to predict CNN activations. Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map. Given either synthetic or natural reference images, human participants choose which of two query images leads to strong positive activation. The experiment is designed to maximize participants' performance, and is the first to probe intermediate instead of final layer representations. We find that synthetic images indeed provide helpful information about feature map activations (82% accuracy; chance would be 50%). However, natural images-originally intended to be a baseline-outperform synthetic images by a wide margin (92% accuracy). Additionally, participants are faster and more confident for natural images, whereas subjective impressions about the interpretability of feature visualization are mixed. The higher informativeness of natural images holds across most layers, for both expert and lay participants as well as for hand- and randomly-picked feature visualizations. Even if only a single reference image is given, synthetic images provide less information than natural images (65% vs. 73%). In summary, popular synthetic images from feature visualizations are significantly less informative for assessing CNN activations than natural images. We argue that future visualization methods should improve over this simple baseline.
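Part of the paper's point is how simple the winning baseline is: rank the dataset images by how strongly they activate the feature map under study. A minimal numpy sketch (function name and toy data invented here):

```python
import numpy as np

def exemplary_images(feature_map_acts, k=9):
    """Indices of the k dataset images that most strongly activate a single
    feature map (max-pooled over spatial positions) -- the natural-image
    baseline that synthetic feature visualizations are compared against."""
    pooled = feature_map_acts.reshape(len(feature_map_acts), -1).max(axis=1)
    return np.argsort(pooled)[::-1][:k]

# Toy demo: responses of one feature map to 100 fake "images".
acts = np.random.default_rng(0).normal(size=(100, 7, 7))
top9 = exemplary_images(acts)
```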

https://weibo.com/1402400261/JrdKubMU5

4、[CL] A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios

M A. Hedderich, L Lange, H Adel, J Strötgen, D Klakow

[Saarland University & Bosch Center for Artificial Intelligence]

A survey of recent approaches for natural language processing in low-resource scenarios. Gives a structured overview of recent work in low-resource NLP, and shows the importance of analyzing resource-lean scenarios across the different dimensions of data availability.

Current developments in natural language processing offer challenges and opportunities for low-resource languages and domains. Deep neural networks are known for requiring large amounts of training data which might not be available in resource-lean scenarios. However, there is also a growing body of works to improve the performance in low-resource settings. Motivated by fundamental changes towards neural models and the currently popular pre-train and fine-tune paradigm, we give an overview of promising approaches for low-resource natural language processing. After a discussion about the definition of low-resource scenarios and the different dimensions of data availability, we then examine methods that enable learning when training data is sparse. This includes mechanisms to create additional labeled data like data augmentation and distant supervision as well as transfer learning settings that reduce the need for target supervision. The survey closes with a brief look into methods suggested in non-NLP machine learning communities, which might be beneficial for NLP in low-resource scenarios.

https://weibo.com/1402400261/JrdPk7hTR

5、[CL] An Industry Evaluation of Embedding-based Entity Alignment

Z Zhang, J Chen, X Chen, H Liu, Y Xiang, B Liu, Y Zheng

[Tencent Jarvis Lab & University of Oxford]

An industry evaluation of embedding-based entity alignment. Evaluates four state-of-the-art embedding-based entity-alignment methods in both an ideal setting and an industrial one, exploring the impact of seed mappings of different sizes and with different biases, and contributes a new industrial benchmark extracted from two heterogeneous knowledge graphs deployed for medical applications.

Embedding-based entity alignment has been widely investigated in recent years, but most proposed methods still rely on an ideal supervised learning setting with a large number of unbiased seed mappings for training and validation, which significantly limits their usage. In this study, we evaluate those state-of-the-art methods in an industrial context, where the impact of seed mappings with different sizes and different biases is explored. Besides the popular benchmarks from DBpedia and Wikidata, we contribute and evaluate a new industrial benchmark that is extracted from two heterogeneous knowledge graphs (KGs) under deployment for medical applications. The experimental results enable the analysis of the advantages and disadvantages of these alignment methods and the further discussion of suitable strategies for their industrial deployment.
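As a rough illustration of what "embedding-based entity alignment from seed mappings" means, here is a generic Procrustes-plus-nearest-neighbour sketch; this is an assumption-laden toy of the general technique, not any of the four systems the paper evaluates:

```python
import numpy as np

def align_entities(emb1, emb2, seeds):
    """Match entities of KG1 to KG2: learn an orthogonal (Procrustes) map
    from the seed pairs, then take nearest neighbours by dot product.
    Assumes the embeddings are unit-normalized."""
    i1, i2 = (list(ix) for ix in zip(*seeds))
    U, _, Vt = np.linalg.svd(emb1[i1].T @ emb2[i2])
    W = U @ Vt                     # orthogonal map: KG1 space -> KG2 space
    return (emb1 @ W @ emb2.T).argmax(axis=1)

# Toy demo: KG2's embeddings are a rotation of KG1's; 10 seed pairs given.
rng = np.random.default_rng(1)
emb1 = rng.normal(size=(20, 8))
emb1 /= np.linalg.norm(emb1, axis=1, keepdims=True)   # unit-norm entities
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
emb2 = emb1 @ Q
matches = align_entities(emb1, emb2, [(i, i) for i in range(10)])
```

The size and bias of the seed set, which the paper varies, corresponds here to which and how many `(i, i)` pairs are supplied.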

A few more papers worth noting:

[LG] OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning

OPAL: accelerating offline reinforcement learning via offline primitive discovery

A Ajay, A Kumar, P Agrawal, S Levine, O Nachum

[MIT & Google Research]

https://weibo.com/1402400261/JrdXyFr0g

[LG] S2cGAN: Semi-Supervised Training of Conditional GANs with Fewer Labels

S2cGAN: semi-supervised training of conditional GANs with fewer labels

A Chakraborty, R Ragesh, M Shah, N Kwatra

[Microsoft Research India]

https://weibo.com/1402400261/Jre1tkRst

[CL] FastFormers: Highly Efficient Transformer Models for Natural Language Understanding

FastFormers: highly efficient Transformer models for natural language understanding

Y J Kim, H H Awadalla

[Microsoft]

https://weibo.com/1402400261/Jre2b2Ky0

[LG] Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling

Improving the reconstruction of disentangled representation learners via multi-stage modelling

A Srivastava, Y Bansal, Y Ding, C Hurwitz, K Xu, B Egger, P Sattigeri, J Tenenbaum, D D. Cox, D Gutfreund

[IBM Research & Harvard University & University of Edinburgh & MIT]

https://weibo.com/1402400261/Jre4dfnBN

[RO] High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards

High-acceleration reinforcement learning for real-world juggling with binary rewards

K Ploeger, M Lutter, J Peters

[Technical University of Darmstadt]

https://weibo.com/1402400261/Jre5EqhrG

[CV] APB2FaceV2: Real-Time Audio-Guided Multi-Face Reenactment

APB2FaceV2: real-time audio-guided multi-face reenactment

J Zhang, X Zeng, C Xu, J Chen, Y Liu, Y Jiang

[Zhejiang University & Huzhou University]

https://weibo.com/1402400261/Jre7t2vUX

[LG] Graph Information Bottleneck

Graph information bottleneck

T Wu, H Ren, P Li, J Leskovec

[Stanford University]

https://weibo.com/1402400261/Jre8V9XWq

[LG] XLVIN: eXecuted Latent Value Iteration Nets

XLVIN: executed latent value iteration nets

A Deac, P Veličković, O Milinković, P Bacon, J Tang, M Nikolić

[Mila & DeepMind & University of Belgrade]

https://weibo.com/1402400261/Jrec0hABy

[CL] Long Document Ranking with Query-Directed Sparse Transformer

Long document ranking with a query-directed sparse Transformer

J Jiang, C Xiong, C Lee, W Wang

[Microsoft Research AI]

https://weibo.com/1402400261/Jredy5mZl

[CL] Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification

Automatically identifying words that can serve as labels for few-shot text classification

T Schick, H Schmid, H Schütze

[LMU Munich]

https://weibo.com/1402400261/Jref2vbJj
