[ICML 2024] Learning High-Order Relationships of Brain Regions
Paper link: Learning High-Order Relationships of Brain Regions
Paper code: GitHub - Graph-and-Geometric-Learning/HyBRiD: Official implementation of paper "Learning High-Order Relationships of Brain Regions" [ICML 2024]
The English here is typed entirely by hand, summarizing and paraphrasing the original paper. Some spelling and grammar mistakes are probably unavoidable; if you spot any, feel free to point them out in the comments! This post reads more like notes, so take it with a grain of salt.
目录
1. Impressions
2. Close Reading of the Paper
2.1. Abstract
2.2. Introduction
2.3. Problem Definition & Notations
2.4. Related Work
2.5. Method
2.5.1. Learning the Hypergraph by Multi-head Masking
2.5.2. Optimization Framework
2.6. Experiments
2.6.1. Predictive Performance of Hyperedges
2.6.2. Further Analysis
2.7. Conclusion
3. Reference
1. Impressions
(1) Oops, this paper has way too many subheadings
(2) An aside: a translation joke — a reminder to keep reading English regularly:
(3) They used a rather uncommon evaluation metric
2. Close Reading of the Paper
2.1. Abstract
① Existing problem: prior work only considers pairwise connections and ignores high-order relationships
② They argue that the extracted high-order relationships should be maximally informative and minimally redundant (MIMR), and propose HYBRID to learn them
2.2. Introduction
①Hypergraph:
② Existing information bottleneck (IB) methods under MIMR tend to extract compressed representations of the input rather than identify the underlying structure of the hypergraph
③ They propose Hypergraph of Brain Regions via multi-head Drop-bottleneck (HYBRID); the pipeline is shown below:
2.3. Problem Definition & Notations
(1)Input
① Each subject's features and phenotype are paired as $(X, Y)$, where $X \in \mathbb{R}^{N \times d}$ denotes features derived from Pearson correlation, $N$ is the number of brain regions, $d$ denotes the feature dimension, and $Y$ denotes the phenotype
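A minimal sketch of how such Pearson-correlation node features could be built from ROI time series (the shapes `T`, `N` and the random signals are made-up stand-ins, not from the paper):

```python
import numpy as np

# Hypothetical shapes: T time points, N brain regions (ROIs).
T, N = 100, 5
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((T, N))  # stand-in for per-region BOLD signals

# Node features: each region's row of the Pearson correlation matrix,
# so X has shape (N, d) with d = N here.
X = np.corrcoef(timeseries, rowvar=False)

assert X.shape == (N, N)
# The diagonal of a correlation matrix is 1 (each region with itself).
assert np.allclose(np.diag(X), 1.0)
```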
(2)Goal
① HYBRID identifies the set of hyperedges $\{h_k\}_{k=1}^{K}$ and assigns a weight $w_k$ to each of them
(3)Representation of Hyperedges
① They represent hyperedges by:
$$h_k = m_k \odot X, \quad k = 1, \dots, K$$
where $m_k \in \{0,1\}^N$ denotes a mask vector and $\odot$ denotes broadcasting element-wise multiplication
② "In other words, each $h_k$ is a randomly zeroed-out version of $X$." (the specific $m_k$ is explained in the Method section)
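The masking above can be sketched in a few lines; the shapes and the hand-written mask are illustrative, not from the paper:

```python
import numpy as np

N, d = 4, 3
rng = np.random.default_rng(1)
X = rng.standard_normal((N, d))          # node features
m_k = np.array([1, 0, 1, 0])             # mask: nodes 0 and 2 belong to hyperedge k

# Broadcasting element-wise multiplication: the mask is expanded over the
# feature dimension, zeroing out the rows of excluded nodes.
h_k = m_k[:, None] * X

assert np.allclose(h_k[1], 0) and np.allclose(h_k[3], 0)  # masked rows are zero
assert np.allclose(h_k[0], X[0])                          # kept rows are unchanged
```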
2.4. Related Work
(1)Hypergraph Construction
① They argue that methods based on nearest neighbors or regularization yield different numbers of hyperedges across instances
② Clustering- or iteration-based methods cannot learn MIMR hyperedges
(2)High-Order Relationships in fMRI
① Existing methods characterize synergy and redundancy rather than identifying the interactions most informative for the cognition score
(3)Information Bottleneck
①HYBRID prefers to identify the underlying structures
(4)Connectivity-based Phenotypic Prediction
① GNNs rely on pairwise connections
2.5. Method
(1)Method Overview
①The pipeline of HYBRID:
As mentioned earlier: $F_c$ is the CONSTRUCTOR that builds the hypergraph, $F_w$ is the WEIGHTER that assigns a weight to each hyperedge, and $F_l$ is the LINEARHEAD used to predict the label $Y$
2.5.1. Learning the Hypergraph by Multi-head Masking
① An instance can be represented as $(X, Y)$
(1)Hyperedges Construction
① Task function: $F_c: \mathbb{R}^{N \times d} \rightarrow \{0,1\}^{N \times K}$
② The predefined number of hyperedges: $K$
③ Assign a head to each hyperedge and construct the hyperedges' masks:
$$m_{jk} = \mathbb{1}\,[\,p_{jk} \ge 0.5\,], \quad j = 1, \dots, N, \; k = 1, \dots, K$$
where $p_{jk} \in [0,1]$ are learnable probabilities and $\mathbb{1}$ is the indicator function:
$$\mathbb{1}[\,p \ge 0.5\,] = \begin{cases} 1 & \text{if } p \ge 0.5 \\ 0 & \text{otherwise} \end{cases}$$
$m_{\cdot k}$ denotes the mask column vector corresponding to the $k$-th hyperedge; since the indicator is not differentiable, its gradient is approximated by the stop-gradient (straight-through) technique
④ So the hyperedges are:
$$h_k = m_{\cdot k} \odot X$$
where $m_{jk}$ denotes the $j$-th element of the vector $m_{\cdot k}$. Combining them gives:
$$H = [\,h_1, h_2, \dots, h_K\,]$$
(2)Hyperedge Weighting
① Task function: $F_w: \left(\mathbb{R}^{N \times d}\right)^K \rightarrow \mathbb{R}^K$
② The weight of each hyperedge is obtained by summing up the features of all its member nodes and applying a dimension reduction:
$$w_k = \mathrm{DimReduction}\Big(\sum_{j=1}^{N} (h_k)_j\Big)$$
where $w_k$ is the weight of the $k$-th hyperedge, $(h_k)_j \in \mathbb{R}^d$, and DimReduction is an MLP with ReLU activations whose output dimension equals 1
③ The linear head: $\hat{Y} = \mathrm{LinearHead}\big([\,w_1, \dots, w_K\,]\big)$
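The weighter and linear head can be sketched end to end. The MLP weights (`W1`, `b1`, `W2`, `b2`), `beta`, and all shapes are hypothetical stand-ins for the learned parameters:

```python
import numpy as np

N, d, K = 4, 3, 2
rng = np.random.default_rng(3)
X = rng.standard_normal((N, d))
masks = (rng.uniform(size=(K, N)) > 0.5).astype(float)   # one binary mask per hyperedge

def dim_reduction(v, W1, b1, W2, b2):
    """Toy stand-in for DimReduction: one ReLU layer, scalar output."""
    return float(np.maximum(W1 @ v + b1, 0) @ W2 + b2)

W1, b1 = rng.standard_normal((8, d)), np.zeros(8)
W2, b2 = rng.standard_normal(8), 0.0

# Weight of hyperedge k: sum the features of its member nodes, then reduce to a scalar.
w = np.array([dim_reduction((masks[k][:, None] * X).sum(axis=0), W1, b1, W2, b2)
              for k in range(K)])

# Linear head maps the K hyperedge weights to the phenotype prediction.
beta = rng.standard_normal(K)
y_hat = float(w @ beta)

assert w.shape == (K,)
```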
(3)Computational Complexity
① The computational complexity of HYBRID is $\mathcal{O}(NdK)$, the same order as an MLP (that's wild)
2.5.2. Optimization Framework
① Treating $X$, $Y$, and $H$ as random variables in the Markov chain $Y \leftrightarrow X \leftrightarrow H$, they optimize:
$$\max \; I(H;Y) - \beta\, I(H;X)$$
where $I(\cdot\,;\cdot)$ denotes mutual information, so $I(H;Y)$ denotes informativeness and $I(H;X)$ denotes redundancy, and $\beta$ is a trade-off hyperparameter
② "Since optimizing the mutual information for high-dimensional continuous variables is intractable, we instead optimize the lower bound of the above equation":
$$I(H;Y) = \mathbb{H}(Y) - \mathbb{H}(Y \mid H) \ge \mathbb{H}(Y) + \mathbb{E}_{p(Y,H)}\big[\log q_\phi(Y \mid H)\big]$$
where $\mathbb{H}$ denotes entropy. They optimize the $q_\phi$ component in the second term, where $q_\phi$ is a model that predicts $Y$ based on $H$, instantiated as the composition $F_l \circ F_w$ ($\circ$ denotes function composition). They define $q_\phi$ as a Gaussian model with variance $\sigma^2$
Explanation:
$\mathbb{H}(Y)$ is the entropy of $Y$, i.e., the uncertainty of $Y$; $\mathbb{H}(Y \mid H)$ is the conditional entropy of $Y$ given $H$, i.e., the uncertainty that remains about $Y$ once $H$ is known. The mutual information $I(H;Y) = \mathbb{H}(Y) - \mathbb{H}(Y \mid H)$ can thus be viewed as the total uncertainty of $Y$ minus the uncertainty left after observing $H$. Maximizing mutual information amounts to reducing the conditional entropy (i.e., reducing how uncertain we are about $Y$ after knowing $H$).
$\mathbb{E}_{p(Y,H)}[\log p(Y \mid H)]$ is the expected log-likelihood of the conditional probability under the joint distribution; it measures how likely $Y$ is given $H$ (i.e., how much information $H$ carries about $Y$).
In this step, the true conditional $p(Y \mid H)$ is replaced by an approximate model $q_\phi(Y \mid H)$, where $\phi$ are the model parameters. This kind of approximation is commonly used to derive tractable objectives, especially in variational inference. $q_\phi$ is a learned model that predicts $Y$ given $H$, e.g., a neural network representing the conditional probability. The KL divergence (Kullback-Leibler divergence) measures the gap between the true conditional $p(Y \mid H)$ and the approximation $q_\phi(Y \mid H)$:
$$D_{\mathrm{KL}}\big(p(Y \mid H)\,\big\|\,q_\phi(Y \mid H)\big) = \mathbb{E}_{p(Y \mid H)}\Big[\log \frac{p(Y \mid H)}{q_\phi(Y \mid H)}\Big]$$
This term accounts for the mismatch between the true distribution and the approximate model: whenever the two are unequal, the KL divergence is positive, meaning the gap should be reduced.
The final expression is a lower bound obtained via variational inference, since the KL divergence is always non-negative. We would like to maximize $I(H;Y)$, but in practice we can only optimize the approximate model $q_\phi$ through the lower bound, because the dropped term (the expected KL divergence) is non-negative.
(1)Proposition 4.1.
① For the redundancy term:
$$I(H;X) \le \sum_{k=1}^{K} I(h_k;X) \le \sum_{k=1}^{K} \sum_{j=1}^{N} I\big((h_k)_j;\, X_j\big) = \sum_{k=1}^{K} \sum_{j=1}^{N} p_{jk}\, \mathbb{H}(X_j)$$
where the equalities hold if and only if the nodes are independent and the hyperedges do not overlap.
My supplementary explanation (a guess, to be honest):
Ⅰ. Mutual information and total information
$I(H;X)$ represents the mutual information between the system $H$ and $X$. It reflects the amount of information shared between $H$ and $X$; concretely, it quantifies how much the node feature information in $X$ predicts the hypergraph features $H$.
The first step of the chain can be understood as an upper bound on the total mutual information: if the system is split into $K$ parts (each denoted $h_k$), the total mutual information is upper-bounded by the sum of the parts' mutual information. This relies on subadditivity, i.e., the whole carries no more information than the sum of its parts.
Ⅱ. Node-level mutual information
Each part is then subdivided into nodes: the system is split into $K$ hyperedges, each containing $N$ nodes, and each hyperedge's mutual information decomposes into per-node mutual information. The computation thus moves from the hyperedge level down to the node level, refining each hyperedge's total information into that of its constituent parts (the nodes).
Ⅲ. Computing node-level mutual information
$\mathbb{H}(X_j)$ quantifies a node's "information content" or randomness. The right-hand side of this step is an expression computed from the nodes' entropies and mask probabilities.
Ⅳ. Conditions for equality
Node independence: each node's mutual information is computed independently, which requires the nodes to be independent. Under independence, one node's information does not affect another's, so the system's total information decomposes into a sum over per-node terms.
Non-overlapping hyperedges: relationships between hyperedges are expressed through their node compositions. If hyperedges overlap (i.e., some nodes appear in multiple hyperedges), the mutual information can no longer be simply decomposed into a sum of per-node terms, because the overlap introduces extra dependencies that break the independent per-node computation.
② Then the loss function will be:
$$\mathcal{L} = \big\|\,Y - \hat{Y}\,\big\|_2^2 + \beta \sum_{k=1}^{K} \sum_{j=1}^{N} p_{jk}\, \mathbb{H}(X_j)$$
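A toy numerical check of this loss: the mask probabilities `p`, the pre-estimated node entropies `H_X`, and `beta` are all made-up values, and the informativeness term reduces to MSE under the Gaussian $q_\phi$:

```python
import numpy as np

N, K = 5, 2
rng = np.random.default_rng(4)
p = rng.uniform(size=(K, N))       # mask probabilities per hyperedge/node
H_X = rng.uniform(size=N)          # stand-in for pre-estimated node entropies
beta = 0.1

y_true, y_pred = np.array([1.0, 0.5]), np.array([0.9, 0.7])

mse = np.mean((y_true - y_pred) ** 2)   # informativeness term (Gaussian q_phi -> MSE)
redundancy = (p * H_X).sum()            # upper bound on I(H; X)
loss = mse + beta * redundancy

assert loss >= mse                      # the redundancy penalty is non-negative
```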
2.6. Experiments
2.6.1. Predictive Performance of Hyperedges
(1)Datasets
① ABIDE: (yes, they just use the officially preprocessed version, hahaha — okay, fair enough) with the Craddock 200 atlas. They used FIQ, VIQ, and PIQ as prediction targets
② ABCD: they chose the baseline (release 2.0) and the 2-year follow-up (release 3.0), obtaining 8 sub-datasets from 2 timepoints under 4 tasks, with the AAL3v1 atlas. The prediction target is FIQ
③Region features: Pearson correlation
(2)Evaluation Metric
① ⭐ Hyperedge quality assessment method: CPM (code link: GitHub - YaleMRRC/CPM). CPM summarizes each subject's connectome by summing the pairwise edge weights into a single score:
$$s = \sum_{i=1}^{M} c_i$$
where $c_i$ denotes the $i$-th pairwise edge weight and $M$ denotes the total number of pairwise edges; the scores are then correlated with the phenotype to obtain an $r$ value
② Since the authors use hyperedges, they adapt the formula by replacing the pairwise edge weights with the learned hyperedge weights:
$$s = \sum_{k=1}^{K} w_k$$
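A minimal sketch of this CPM-style evaluation, with fully synthetic subjects and weights (the subject count, hyperedge count, and noise level are arbitrary):

```python
import numpy as np

S, K = 50, 8                                   # subjects, hyperedges
rng = np.random.default_rng(5)
w = rng.standard_normal((S, K))                # per-subject hyperedge weights
phenotype = w.sum(axis=1) + 0.5 * rng.standard_normal(S)  # synthetic target

# CPM-style score: collapse each subject's (hyper)edge weights into one
# network-strength score, then correlate the scores with the phenotype.
strength = w.sum(axis=1)
r = np.corrcoef(strength, phenotype)[0, 1]

assert -1.0 <= r <= 1.0
assert r > 0                                   # strength drives this synthetic phenotype
```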
(3)Baselines
①They compare their model with standard methods, hypergraph construction methods, and connectivity-based phenotypic prediction methods
(4)Implementation & Training Details
①Hyperparameter setting:
(5)Results
① $r$-value comparison table on ABCD:
② $r$-value comparison table on ABIDE:
(6)Runtime
① Very fast, but the details are in Appendix J and I couldn't be bothered to dig through them
(7)Ablation Studies
①Module ablation:
| Variant | Description |
| --- | --- |
| HYBRIDNoMask | Do not mask at all, so all nodes and their features are visible to every head |
| HYBRIDRndMask | Replace the learnable masks with random masks of the same sparsity, initialized at the beginning of training |
| HYBRIDSoftMask | Remove the indicator function and use the probabilities $p$ directly as soft masks |
(8)Additional Experiments on Synthetic Dataset
① "Since real datasets contain no ground-truth hyperedges for outcome-oriented informative hyperedge learning, we construct a synthetic dataset to verify whether our model can recover the correct hyperedge structure under the MIMR objective. We use precision, recall, and F1 score to measure the correctness of the learned hyperedges against the ground truth. Although learning hyperedges under the supervision of task labels alone is challenging, we find that our model achieves high performance, with an average F1-score improvement of 28.3% over the strongest baseline. Details of the synthetic experiments can be found in Appendix G." Below is Appendix G, which I dug up:
(9)Additional Experiments on Model Fit
① Appendix H reports the model's goodness of fit
2.6.2. Further Analysis
① All experiments here are on ABCD due to its larger scale
(1)Hyperedge Degree Distribution
①Hyperedge degree distribution:
(2)Hyperedges with Higher Degree are More Significant
①The relationship between hyperedge degree and its significance:
(3) High-Order Relationships Are Better than Pairwise Ones
①The number of remaining edges under different thresholds:
There are significantly more important hyperedges than pairwise edges
(4)Hyperedge Case Study
①Important hyperedges:
2.7. Conclusion
①Limitation: they do not consider the dynamic hyperedges
3. Reference
Qiu, W. et al. (2024) 'Learning High-Order Relationships of Brain Regions', ICML. doi: https://doi.org/10.48550/arXiv.2312.02203