DoRA: Weight-Decomposed Low-Rank Adaptation (2024)

Shih-Yang Liu  Chien-Yi Wang  Hongxu Yin  Pavlo Molchanov  Yu-Chiang Frank Wang  Kwang-Ting Cheng  Min-Hung Chen

Abstract

Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because they avoid additional inference costs. However, there often remains an accuracy gap between these methods and full fine-tuning (FT). In this work, we first introduce a novel weight decomposition analysis to investigate the inherent differences between FT and LoRA. Aiming to resemble the learning capacity of FT based on our findings, we propose Weight-Decomposed Low-Rank Adaptation (DoRA). DoRA decomposes the pre-trained weight into two components, magnitude and direction, for fine-tuning, specifically employing LoRA for directional updates to efficiently minimize the number of trainable parameters. By employing DoRA, we enhance both the learning capacity and training stability of LoRA while avoiding any additional inference overhead. DoRA consistently outperforms LoRA on fine-tuning LLaMA, LLaVA, and VL-BART across various downstream tasks, such as commonsense reasoning, visual instruction tuning, and image/video-text understanding.


1 Introduction


[Figure 1: An overview of DoRA.]

Models that are pre-trained with extensive general-domain datasets have demonstrated remarkable generalization abilities, significantly benefiting a wide array of applications, from natural language processing (NLP) tasks (Qin et al., 2023; Taori et al., 2023) to multi-modal tasks (Li et al., 2022; Liu et al., 2023a). To tailor these general models for specific downstream tasks, full fine-tuning (FT) is commonly employed, involving the retraining of all model parameters. Nevertheless, as models and datasets expand in scale, the expense of fine-tuning the entire model becomes prohibitively large.

To address this issue, parameter-efficient fine-tuning (PEFT) methods (Houlsby et al., 2019) have been introduced to fine-tune pre-trained models with only a minimal number of parameters. Among these, LoRA (Hu et al., 2022), which does not change the model architecture, has become notably popular for its simplicity and efficacy. Nevertheless, there is still a capacity gap between LoRA and FT, which is often attributed to the limited number of trainable parameters without further exploration of other underlying causes (Hu et al., 2022; Kopiczko et al., 2024).

Drawing on Weight Normalization (Salimans & Kingma, 2016), which achieves faster convergence by improving the conditioning of the gradient through weight reparameterization, we introduce a novel weight decomposition analysis that first reparameterizes model weights into magnitude and directional components and then examines the changes in magnitude and direction introduced by LoRA and FT. Our analysis reveals that LoRA and FT exhibit markedly distinct patterns of updates, leading us to surmise that these variations mirror the learning capability of each method. Inspired by our findings, we propose Weight-Decomposed Low-Rank Adaptation (DoRA), which begins by decomposing the pre-trained weight into its magnitude and directional components and then fine-tunes both. Given the substantial size of the directional component in terms of parameters, we exploit LoRA for the directional adaptation to enable efficient fine-tuning, as illustrated in Figure 1. Moreover, DoRA exhibits a learning behavior similar to FT both empirically and mathematically, suggesting a learning capacity closely resembling that of FT. We validate DoRA across a wide variety of tasks, from NLP to vision-language, and over various backbones, including LLMs and LVLMs. The experimental results show that DoRA consistently outperforms LoRA without sacrificing inference efficiency, on tasks such as commonsense reasoning (+3.4/+1.0 on LLaMA-7B/13B), visual instruction tuning (+0.6 on LLaVA-7B), and image/video-text understanding (+0.9/+1.9 on VL-BART).

The summary of our contributions is as follows:

  • We introduce DoRA, a novel PEFT method that incorporates weight decomposition, achieving a learning capacity closely resembling FT without any additional inference latency over LoRA.

  • We introduce a novel weight decomposition analysis to uncover the fundamental differences in the learning patterns of FT and different PEFT methods.

  • DoRA consistently surpasses LoRA on various tasks, from NLP to vision-language benchmarks, and across various backbones, including LLMs and LVLMs.

2 Related Works

Parameter-Efficient Fine-Tuning (PEFT) methods are designed to reduce the high expense of fine-tuning large-scale models. They achieve this by training a relatively small subset of parameters, compared to the total number of parameters, to adapt to downstream tasks. Existing PEFT methods can be divided into three categories. The first category, referred to as adapter-based methods, introduces additional trainable modules into the original frozen backbone (Houlsby et al., 2019; He et al., 2021; Karimi Mahabadi et al., 2021; Mahabadi et al., 2021). For example, (Houlsby et al., 2019) proposes adding linear modules in sequence to the existing layer, whereas (He et al., 2021) advocates integrating these modules in parallel with the original layer to enhance performance. The second category is prompt-based methods. These methods add extra soft tokens (prompts) to the initial input and focus solely on fine-tuning these trainable vectors, as seen in works like (Lester et al., 2021; Razdaibiedina et al., 2023; Wang et al., 2023). However, these approaches typically face challenges due to their sensitivity to initialization, which affects their overall effectiveness. These first two categories, whether altering the model's input or architecture, increase inference latency compared to the baseline model.

LoRA (Hu et al., 2022) and its variants form the third category of PEFT, notable for not adding any extra inference burden. These methods apply low-rank matrices to approximate weight changes during fine-tuning and can be merged with pre-trained weights prior to inference. For example, (Zhang et al., 2023) employs SVD decomposition and prunes less significant singular values for more efficient updates. (Hyeon-Woo et al., 2022) focuses on the low-rank Hadamard product for federated learning. (Qiu et al., 2023; Liu et al., 2023b) exploit orthogonal factorization for fine-tuning diffusion models. (Renduchintala et al., 2023) uses weight tying to further reduce the trainable parameters. (Yeh et al., 2023) introduces a unified LoRA-family framework for Stable Diffusion. (Ponti et al., 2022) chooses different combinations of LoRAs from an inventory with a routing function for different tasks. (Kopiczko et al., 2024) implements learnable scaling vectors to adjust a shared pair of frozen random matrices across layers. Our research also falls within this third category, and we validate the efficacy of our proposed method alongside LoRA and its variants through comprehensive experimentation.

3 Pattern Analysis of LoRA and FT

3.1 Low-Rank Adaptation (LoRA)

Building upon the hypothesis that updates made during fine-tuning exhibit a low "intrinsic rank", LoRA (Hu et al., 2022) proposes using the product of two low-rank matrices to update the pre-trained weights incrementally. For a pre-trained weight matrix $W_0 \in \mathbb{R}^{d \times k}$, LoRA models the weight update $\Delta W \in \mathbb{R}^{d \times k}$ with a low-rank decomposition $BA$, where $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$ are two low-rank matrices with $r \ll \min(d, k)$. Consequently, the fine-tuned weight $W'$ can be represented as:

$W' = W_0 + \Delta W = W_0 + \underline{BA}$   (1)

where $W_0$ remains static during fine-tuning and the underlined parameters are trained. The matrix $A$ is initialized with a uniform Kaiming distribution (He et al., 2015), while $B$ is initially set to zero, so $\Delta W = BA$ is zero at the start of training. Notably, this decomposition of $\Delta W$ can be substituted with other LoRA variants, such as VeRA (Kopiczko et al., 2024). Additionally, based on Eq. (1), we can merge the learned $\Delta W$ with the pre-trained weight $W_0$ to obtain $W'$ ahead of deployment; since both $W'$ and $W_0$ lie in $\mathbb{R}^{d \times k}$, LoRA and its variants introduce no extra latency during inference compared to the original model.
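To make Eq. (1) concrete, the following is a minimal PyTorch-style sketch of a LoRA-augmented linear layer (our own illustration, not the reference implementation; the usual $\alpha/r$ scaling factor and the bias term are omitted for brevity):

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch of Eq. (1): W' = W0 + BA, with W0 frozen and B, A trainable."""
    def __init__(self, W0: torch.Tensor, rank: int = 8):
        super().__init__()
        d, k = W0.shape
        self.register_buffer("W0", W0)               # frozen pre-trained weight
        self.B = nn.Parameter(torch.zeros(d, rank))  # B = 0, so Delta W = 0 at start
        self.A = nn.Parameter(torch.empty(rank, k))  # A ~ uniform Kaiming
        nn.init.kaiming_uniform_(self.A, a=math.sqrt(5))

    def forward(self, x):
        W_prime = self.W0 + self.B @ self.A          # W' = W0 + BA
        return x @ W_prime.T

    @torch.no_grad()
    def merge(self):
        # Fold BA into W0 once before deployment: no extra inference latency.
        return self.W0 + self.B @ self.A
```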


[Figure 2: Magnitude ($\Delta M$) and direction ($\Delta D$) updates of the query weight matrix for (a) FT, (b) LoRA, and (c) DoRA across layers and training steps.]

3.2 Weight Decomposition Analysis

The study presented in LoRA (Hu et al., 2022) suggests that LoRA can be considered a general approximation of full fine-tuning: by gradually increasing the rank $r$ of LoRA to match the rank of the pre-trained weights, LoRA can attain a level of expressiveness akin to that of FT. Consequently, many previous studies have attributed the accuracy discrepancy between LoRA and FT primarily to the limited number of trainable parameters, often without further analysis (Hu et al., 2022; Kopiczko et al., 2024). Drawing inspiration from Weight Normalization (Salimans & Kingma, 2016), which reparameterizes the weight matrix into magnitude and direction to accelerate optimization, we introduce a novel weight decomposition analysis. Our analysis restructures the weight matrix into two separate components, magnitude and direction, to reveal the inherent differences between the learning patterns of LoRA and FT.

Analysis Method: This analysis examines the updates in both magnitude and direction of the LoRA and FT weights relative to the pre-trained weights, to reveal the fundamental differences in the learning behaviors of both. The weight decomposition of $W \in \mathbb{R}^{d \times k}$ can be formulated as:

$W = m \dfrac{V}{\|V\|_c} = \|W\|_c \dfrac{W}{\|W\|_c}$   (2)

where $m \in \mathbb{R}^{1 \times k}$ is the magnitude vector, $V \in \mathbb{R}^{d \times k}$ is the directional matrix, and $\|\cdot\|_c$ is the vector-wise norm of a matrix across each column. This decomposition ensures that each column of $V/\|V\|_c$ remains a unit vector, and the corresponding scalar in $m$ defines the magnitude of that vector.
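As a sketch, the decomposition in Eq. (2) amounts to a few lines of PyTorch (variable names are ours):

```python
import torch

def decompose(W: torch.Tensor):
    """Split W (d x k) into magnitude m (1 x k) and unit-column direction (d x k)."""
    m = W.norm(p=2, dim=0, keepdim=True)   # ||W||_c: column-wise norms
    direction = W / m                      # each column becomes a unit vector
    return m, direction

W = torch.randn(64, 32)
m, direction = decompose(W)
assert torch.allclose(m * direction, W, atol=1e-6)                       # W is recovered
assert torch.allclose(direction.norm(dim=0), torch.ones(32), atol=1e-5)  # unit columns
```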

For our weight decomposition analysis, we select the VL-BART model fine-tuned on four image-text tasks, as outlined in (Sung et al., 2022), for a case study. Following (Sung et al., 2022), which applies LoRA only to the query/value weight matrices in the self-attention modules, we decompose the pre-trained weight $W_0$, the fully fine-tuned weight $W_{\text{FT}}$, and the merged LoRA weight $W_{\text{LoRA}}$ of the query/value weight matrices using Eq. (2). The magnitude and directional variations between $W_0$ and $W_{\text{FT}}$ can be defined as follows:

$\Delta M_{\text{FT}}^{t} = \dfrac{\sum_{n=1}^{k} |m_{\text{FT}}^{n,t} - m_{0}^{n}|}{k}$   (3)
$\Delta D_{\text{FT}}^{t} = \dfrac{\sum_{n=1}^{k} \left(1 - \cos(V_{\text{FT}}^{n,t}, W_{0}^{n})\right)}{k}$   (4)

Here, $\Delta M_{\text{FT}}^{t}$ and $\Delta D_{\text{FT}}^{t}$ represent the magnitude difference and directional difference between $W_0$ and $W_{\text{FT}}$ at training step $t$, respectively, with $\cos(\cdot,\cdot)$ being the cosine similarity function. $m_{\text{FT}}^{n,t}$ and $m_{0}^{n}$ are the $n$-th scalars in their respective magnitude vectors, while $V_{\text{FT}}^{n,t}$ and $W_{0}^{n}$ are the $n$-th columns of $V_{\text{FT}}^{t}$ and $W_0$. The magnitude and directional differences between $W_{\text{LoRA}}$ and $W_0$ are calculated similarly, per Eq. (3) and Eq. (4). We select checkpoints from four different training steps for analysis, comprising three intermediate steps and the final checkpoint from both FT and LoRA, and perform weight decomposition analysis on each of these checkpoints to determine $\Delta M$ and $\Delta D$ across different layers.
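The two metrics follow directly from Eqs. (3) and (4); below is a sketch of their computation (checkpoint loading omitted; function and variable names are ours):

```python
import torch
import torch.nn.functional as F

def magnitude_direction_diff(W0: torch.Tensor, Wt: torch.Tensor):
    """Delta M (Eq. 3) and Delta D (Eq. 4) between a pre-trained weight W0
    and a fine-tuned weight Wt, both of shape (d, k)."""
    delta_m = (Wt.norm(dim=0) - W0.norm(dim=0)).abs().mean()
    # Column-wise cosine; normalizing Wt's columns does not change the cosine,
    # so cos(V_t^n, W0^n) equals cos(Wt^n, W0^n).
    cos = F.cosine_similarity(Wt, W0, dim=0)
    delta_d = (1.0 - cos).mean()
    return delta_m.item(), delta_d.item()
```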

Analysis Results: Figure 2 (a) and (b) illustrate the alterations in the query weight matrix under FT and LoRA, with each point representing a $(\Delta D^{t}, \Delta M^{t})$ pair from query weight matrices across different layers and training steps. Similarly, Figure 5 in the appendix displays the value weight matrix modifications. LoRA exhibits a consistent positive-slope trend across all intermediate steps, signifying a proportional relationship between the changes in direction and magnitude. In contrast, FT displays a more varied learning pattern with a relatively negative slope. This distinction between FT and LoRA likely mirrors their respective learning capabilities. While LoRA tends to increase or decrease the magnitude and direction updates proportionally, it lacks the capability for more subtle adjustments: it does not show proficiency in executing slight directional changes alongside more significant magnitude alterations, or vice versa, a feature more characteristic of FT. We suspect that this limitation of LoRA stems from the challenge of learning magnitude and directional adaptation concurrently, which could be overly complex for LoRA. Consequently, in this work, we aim to propose a variant of LoRA that exhibits a learning pattern more closely resembling that of FT and improves the learning capacity over LoRA.

4 Method

4.1 Weight-Decomposed Low-Rank Adaptation

Drawing from the insights of our weight decomposition analysis, we introduce Weight-Decomposed Low-Rank Adaptation (DoRA). DoRA initially decomposes the pre-trained weight into its magnitude and directional components and fine-tunes both of them. Because the directional component is large in terms of parameter count, we further decompose it with LoRA for efficient fine-tuning.

Our intuitions are two-fold. First, we believe that limiting LoRA to concentrate exclusively on directional adaptation while also allowing the magnitude component to be tunable simplifies the task compared to the original approach, where LoRA must learn adjustments in both magnitude and direction. Second, the process of optimizing directional updates is made more stable through weight decomposition, which we examine more thoroughly in Section 4.2. It is important to highlight that the main distinction between DoRA and Weight Normalization (Salimans & Kingma, 2016) lies in their training approaches. Weight Normalization trains both components from scratch, making the method sensitive to different initializations. Conversely, DoRA avoids such initialization concerns since both components begin with pre-trained weights. We initialize DoRA with the pre-trained weight $W_0$ as outlined in Eq. (2), such that $m = \|W_0\|_c$ and $V = W_0$ after initialization. We then keep $V$ frozen and make $m$ a trainable vector. The directional component is then updated through LoRA. DoRA can be formulated, similarly to Eq. (1), as:

$W' = \underline{m} \dfrac{V + \Delta V}{\|V + \Delta V\|_c} = \underline{m} \dfrac{W_0 + \underline{BA}}{\|W_0 + \underline{BA}\|_c}$   (5)

where $\Delta V$ is the incremental directional update learned by multiplying the two low-rank matrices $B$ and $A$, and the underlined parameters denote the trainable parameters. The matrices $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$ are initialized in line with LoRA's strategy to ensure that $W'$ equals $W_0$ before fine-tuning. Furthermore, DoRA can be merged with the pre-trained weight before inference, thereby introducing no additional latency.
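A minimal sketch of Eq. (5) as a PyTorch module, built from a pre-trained weight `W0` (our own illustration, not the released implementation; the $\alpha/r$ scaling and bias are again omitted):

```python
import math
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Sketch of Eq. (5): W' = m * (W0 + BA) / ||W0 + BA||_c."""
    def __init__(self, W0: torch.Tensor, rank: int = 8):
        super().__init__()
        d, k = W0.shape
        self.register_buffer("W0", W0)                       # frozen direction base, V = W0
        self.m = nn.Parameter(W0.norm(dim=0, keepdim=True))  # trainable magnitude, m = ||W0||_c
        self.B = nn.Parameter(torch.zeros(d, rank))          # LoRA pair for Delta V
        self.A = nn.Parameter(torch.empty(rank, k))
        nn.init.kaiming_uniform_(self.A, a=math.sqrt(5))

    def forward(self, x):
        V_prime = self.W0 + self.B @ self.A                  # V + Delta V
        norm = V_prime.norm(dim=0, keepdim=True)             # ||V + Delta V||_c
        W_prime = self.m * V_prime / norm
        return x @ W_prime.T
```

At initialization, $B = 0$ implies $V' = W_0$, and the column norm cancels the magnitude, so $W' = W_0$, matching the requirement stated above.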

We visualize the magnitude and directional differences of the query weight matrix between the merged DoRA weight and $W_0$, in the same setting as for FT and LoRA, in Figure 2 (c), and leave the visualization of the value weight matrix to the appendix. From the regression lines for $(\Delta D, \Delta M)$ of both DoRA and FT, we see that in contrast to LoRA's pattern, DoRA and FT are characterized by a distinct negative slope. We reason that FT tends towards a negative slope because the pre-trained weights already possess substantial knowledge suitable for various downstream tasks; therefore, when provided with adequate learning capacity, a larger alteration of magnitude or direction alone is sufficient for downstream adaptation. We additionally compute the correlation between $\Delta D$ and $\Delta M$ for FT, LoRA, and DoRA, and find that FT and DoRA exhibit negative correlation values of -0.62 and -0.31, respectively, while LoRA shows a positive correlation of 0.83. In conclusion, DoRA's ability to make substantial directional adjustments with relatively minimal changes in magnitude, or the reverse, while showing learning patterns closer to FT's, signifies its superior learning capacity over LoRA.

4.2 Gradient Analysis of DoRA

In this section, we first derive the gradient of DoRA and illustrate how our proposed decomposition benefits the optimization of $\Delta V$. We then analyze, from the gradient's perspective, why the learning pattern of DoRA tends to have a negative slope.

From Eq. (5), we can obtain the gradient of the loss $\mathcal{L}$ with respect to $m$ and $V' = V + \Delta V$ as:

$\nabla_{V'}\mathcal{L} = \dfrac{m}{\|V'\|_c}\left(I - \dfrac{V'V'^{\mathbf{T}}}{\|V'\|_c^{2}}\right)\nabla_{W'}\mathcal{L}$   (6)
$\nabla_{m}\mathcal{L} = \dfrac{\nabla_{W'}\mathcal{L} \cdot V'}{\|V'\|_c}$   (7)

Eq. (6) reveals that the weight gradient $\nabla_{W'}\mathcal{L}$ is scaled by $m/\|V'\|_c$ and projected away from the current weight matrix. These two effects contribute to aligning the gradient's covariance matrix more closely with the identity matrix, which is advantageous for optimization (Salimans & Kingma, 2016). Additionally, given that $V' = V + \Delta V$, the gradient $\nabla_{V'}\mathcal{L}$ is equivalent to $\nabla_{\Delta V}\mathcal{L}$. Therefore, the optimization benefits derived from this decomposition are fully transferred to $\Delta V$, enhancing the learning stability of LoRA.
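Eqs. (6) and (7) can be sanity-checked numerically with autograd; below is a small self-contained check (our own, using the column-wise convention of Eq. (2)):

```python
import torch

torch.manual_seed(0)
d, k = 6, 4
m = torch.rand(1, k, requires_grad=True)    # magnitude vector
Vp = torch.randn(d, k, requires_grad=True)  # V' = V + Delta V

norm = Vp.norm(dim=0, keepdim=True)         # ||V'||_c
W = m * Vp / norm                           # W' as in Eq. (5)
loss = (W * torch.randn(d, k)).sum()        # arbitrary scalar loss

gW, gV, gm = torch.autograd.grad(loss, (W, Vp, m))

for n in range(k):                          # verify Eq. (6) and Eq. (7) per column
    v, g, c = Vp[:, n], gW[:, n], norm[0, n]
    proj = torch.eye(d) - torch.outer(v, v) / c**2
    assert torch.allclose(gV[:, n], (m[0, n] / c) * (proj @ g), atol=1e-5)  # Eq. (6)
    assert torch.allclose(gm[0, n], (g @ v) / c, atol=1e-5)                 # Eq. (7)
```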

We can gain further insight into the learning pattern of DoRA by referring to Eq. (7). In the subsequent discussion, we represent vectors with lower-case letters instead of the previous matrix notation. Consider $w'' = w' + \Delta w$ as the parameter update for a weight vector, where $\Delta w \propto \nabla_{w'}\mathcal{L}$. Consider two hypothetical update scenarios, $S1$ and $S2$: $S1$ involves a smaller directional update ($\Delta D_{S1}$), while $S2$ involves a larger one ($\Delta D_{S2}$). Assume $\|\Delta w_{S1}\| = \|\Delta w_{S2}\|$, and at time 0 we have $\Delta v = 0$ and $v' = v$. From $\Delta D_{S1} < \Delta D_{S2}$, it follows that $|\cos(\Delta w_{S1}, w')| > |\cos(\Delta w_{S2}, w')|$. Since $\Delta w \propto \nabla_{w'}\mathcal{L}$, this implies $|\cos(\nabla_{w'}^{S1}\mathcal{L}, w')| > |\cos(\nabla_{w'}^{S2}\mathcal{L}, w')|$. From Sec. 4.1, with $v$ initialized as $v_0$ and $w' = w_0$ at time 0, we get $|\cos(\nabla_{w'}\mathcal{L}, w')| = |\cos(\nabla_{w'}\mathcal{L}, v')| = |\cos(\nabla_{w'}\mathcal{L}, v)|$. Using the cosine similarity equation with $\Delta v = 0$:

$\cos(\nabla_{w'}\mathcal{L}, v') = \cos(\nabla_{w'}\mathcal{L}, v) = \dfrac{\nabla_{w'}\mathcal{L} \cdot v}{\|\nabla_{w'}\mathcal{L}\|\,\|v\|}$   (8)

Denote $m_{*}$ as the magnitude scalar of the vector $w'$; then Eq. (7) w.r.t. $m_{*}$ can be rewritten as:

$\nabla_{m_{*}}\mathcal{L} = \dfrac{\nabla_{w'}\mathcal{L} \cdot v'}{\|v'\|} = \|\nabla_{w'}\mathcal{L}\| \cdot \cos(\nabla_{w'}\mathcal{L}, v)$   (9)

Given that $\|\Delta w_{S1}\| = \|\Delta w_{S2}\|$ for $S1$ and $S2$, we have $\|\nabla_{w'}^{S1}\mathcal{L}\| = \|\nabla_{w'}^{S2}\mathcal{L}\|$. Therefore, with:

$\|\nabla_{w'}^{S1}\mathcal{L}\| \cdot |\cos(\nabla_{w'}^{S1}\mathcal{L}, v)| > \|\nabla_{w'}^{S2}\mathcal{L}\| \cdot |\cos(\nabla_{w'}^{S2}\mathcal{L}, v)|$   (10)

it can be inferred that $|\nabla_{m_{*}}^{S1}\mathcal{L}| > |\nabla_{m_{*}}^{S2}\mathcal{L}|$, which indicates that $S1$ has larger magnitude updates than $S2$ while having a smaller directional alteration. This conclusion generally holds in practice, as evidenced by Figure 2 (c). Consequently, we have effectively shown how DoRA adjusts the learning pattern to diverge from that of LoRA and align more closely with that of FT.

4.3 Reduction of Training Overhead

In Eq. (1), the gradients of $W'$ and $\Delta W$ are the same. However, with DoRA, which redirects the low-rank adaptation toward the directional component, the gradient of the low-rank updates differs from that of $W'$, as shown in Eq. (6). This divergence necessitates extra memory during backpropagation. To address this, we suggest treating $\|V + \Delta V\|_c$ in Eq. (5) as a constant, thereby detaching it from the gradient graph. That is, $\|V + \Delta V\|_c$ dynamically reflects the updates of $\Delta V$ but receives no gradient during backpropagation. With this modification, the gradient w.r.t. $m$ remains unchanged, and $\nabla_{V'}\mathcal{L}$ is redefined as:

$\nabla_{V'}\mathcal{L} = \dfrac{m}{C}\nabla_{W'}\mathcal{L}, \quad \text{where } C = \|V'\|_c$   (11)

This approach drastically reduces the memory consumption of the gradient graph without a noticeable difference in accuracy. We conduct an ablation study to evaluate the impact of the proposed modification on fine-tuning LLaMA-7B and VL-BART. The results indicate that the modification reduces training memory by approximately 24.4% when fine-tuning LLaMA and 12.4% for VL-BART. Furthermore, the accuracy of DoRA with the modification remains unchanged for VL-BART and differs by a negligible 0.2 from DoRA without the modification on LLaMA. For a comprehensive comparison of training memory usage and accuracy, see Table 7 in the appendix. Consequently, all subsequent experiments with DoRA incorporate this adjustment.
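In PyTorch terms, the modification is a one-line change to the forward pass of a DoRA layer such as the `DoRALinear` sketch in Section 4.1 (again a sketch of ours, not the released implementation):

```python
# Treat ||V + Delta V||_c as a constant (Eq. 11) by detaching it from the
# autograd graph: its value still tracks Delta V at every step, but no
# gradient flows through the normalization term, shrinking the gradient graph.
V_prime = self.W0 + self.B @ self.A
norm = V_prime.norm(dim=0, keepdim=True).detach()   # <-- the only change
W_prime = self.m * V_prime / norm
```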

5 Experiments

We conduct a variety of experiments to showcase the efficacy of DoRA on tasks spanning the language, image, and video domains. First, we evaluate DoRA against several parameter-efficient fine-tuning (PEFT) methods by fine-tuning LLaMA-7B/13B on commonsense reasoning tasks. We then extend from a single modality to multi-modality and compare DoRA with LoRA on multi-task image/video-text understanding with VL-BART and on visual instruction tuning with LLaVA-1.5-7B. Following this, we explore the compatibility of DoRA with LoRA and VeRA (Kopiczko et al., 2024) for instruction tuning on LLaMA-7B and LLaMA2-7B. Furthermore, we perform a series of ablation studies to illustrate that DoRA surpasses LoRA irrespective of the number of fine-tuning samples and rank settings. Lastly, we analyze the tuning granularity of DoRA and show that DoRA can achieve better accuracy than LoRA with fewer trainable parameters by updating the directional components of only certain modules while tuning the magnitudes of the rest.

5.1 Commonsense Reasoning

Table 1: Accuracy comparison of LLaMA-7B/13B with various PEFT methods on eight commonsense reasoning tasks.

| Model | PEFT Method | # Params (%) | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ChatGPT | - | - | 73.1 | 85.4 | 68.5 | 78.5 | 66.1 | 89.8 | 79.9 | 74.8 | 77.0 |
| LLaMA-7B | Prefix | 0.11 | 64.3 | 76.8 | 73.9 | 42.1 | 72.1 | 72.9 | 54.0 | 60.6 | 64.6 |
| LLaMA-7B | Series | 0.99 | 63.0 | 79.2 | 76.3 | 67.9 | 75.7 | 74.5 | 57.1 | 72.4 | 70.8 |
| LLaMA-7B | Parallel | 3.54 | 67.9 | 76.4 | 78.8 | 69.8 | 78.9 | 73.7 | 57.3 | 75.2 | 72.2 |
| LLaMA-7B | LoRA | 0.83 | 68.9 | 80.7 | 77.4 | 78.1 | 78.8 | 77.8 | 61.3 | 74.8 | 74.7 |
| LLaMA-7B | DoRA† (Ours) | 0.43 | 70.0 | 82.6 | 79.7 | 83.2 | 80.6 | 80.6 | 65.4 | 77.6 | 77.5 |
| LLaMA-7B | DoRA (Ours) | 0.84 | 68.5 | 82.9 | 79.6 | 84.8 | 80.8 | 81.4 | 65.8 | 81.0 | 78.1 |
| LLaMA-13B | Prefix | 0.03 | 65.3 | 75.4 | 72.1 | 55.2 | 68.6 | 79.5 | 62.9 | 68.0 | 68.4 |
| LLaMA-13B | Series | 0.80 | 71.8 | 83.0 | 79.2 | 88.1 | 82.4 | 82.5 | 67.3 | 81.8 | 79.5 |
| LLaMA-13B | Parallel | 2.89 | 72.5 | 84.9 | 79.8 | 92.1 | 84.7 | 84.2 | 71.2 | 82.4 | 81.4 |
| LLaMA-13B | LoRA | 0.67 | 72.1 | 83.5 | 80.5 | 90.5 | 83.7 | 82.8 | 68.3 | 82.4 | 80.5 |
| LLaMA-13B | DoRA† (Ours) | 0.35 | 72.5 | 85.3 | 79.9 | 90.1 | 82.9 | 82.7 | 69.7 | 83.6 | 80.8 |
| LLaMA-13B | DoRA (Ours) | 0.68 | 72.4 | 84.9 | 81.5 | 92.4 | 84.2 | 84.2 | 69.6 | 82.8 | 81.5 |

We evaluate DoRA against LoRA and several baseline methods, including prompt learning (Prefix) (Li & Liang, 2021), series adapters (Series) (Houlsby et al., 2019), and parallel adapters (Parallel) (He et al., 2021), on LLaMA-7B/13B (Touvron et al., 2023) for commonsense reasoning tasks. We also include ChatGPT's accuracy, obtained with the gpt-3.5-turbo API using zero-shot Chain-of-Thought prompting (OpenAI, 2023; Wei et al., 2022).

The commonsense reasoning benchmark comprises 8 sub-tasks, each with a predefined training and testing set. We follow the setting of (Hu et al., 2023), amalgamating the training datasets from all 8 tasks to create the final training dataset and conducting evaluations on the individual testing dataset of each task. To ensure a fair comparison, we initially fine-tune models with DoRA following the LoRA configuration, maintaining the same rank while adjusting only the learning rate. The marginal 0.01% increase in trainable parameters for DoRA over LoRA, detailed in Table 1, arises from the inclusion of the learnable magnitude components (parameters of size $1 \times k$). We then further halve the rank used in DoRA compared to LoRA and denote this adjusted configuration as DoRA†. See Table 8 for details of the hyperparameters used.

Table 1 demonstrates that DoRA consistently surpasses all baseline methods across both the LLaMA-7B and 13B models. Notably, on LLaMA-7B, where LoRA already exceeds the other baselines, DoRA further improves accuracy by 3.4%, outstripping ChatGPT's accuracy. Conversely, on LLaMA-13B, where LoRA is inferior to the Parallel adapter, DoRA achieves accuracy superior to LoRA by 1% and comparable to the Parallel adapter, with only a quarter of the Parallel adapter's trainable parameters and, unlike the Parallel adapter, without adding any extra inference overhead. Furthermore, DoRA† exceeds LoRA's performance by 2.8% on LLaMA-7B and 1% on LLaMA-13B, despite having only half the trainable parameters of LoRA. This outcome suggests that DoRA enhances the learning capability of LoRA, reducing the need for a higher rank to surpass LoRA in accuracy.

5.2 Image/Video-Text Understanding

Table 2: Multi-task evaluation of VL-BART on image-text understanding tasks.

| Method | # Params (%) | VQA^v2 | GQA | NLVR^2 | COCO Cap | Avg. |
|---|---|---|---|---|---|---|
| FT | 100 | 66.9 | 56.7 | 73.7 | 112.0 | 77.3 |
| LoRA | 5.93 | 65.2 | 53.6 | 71.9 | 115.3 | 76.5 |
| DoRA (Ours) | 5.96 | 65.8 | 54.7 | 73.1 | 115.9 | 77.4 |

Table 3: Multi-task evaluation of VL-BART on video-text understanding tasks.

| Method | # Params (%) | TVQA | How2QA | TVC | YC2C | Avg. |
|---|---|---|---|---|---|---|
| FT | 100 | 76.3 | 73.9 | 45.7 | 154.0 | 87.5 |
| LoRA | 5.17 | 75.5 | 72.9 | 44.6 | 140.9 | 83.5 |
| DoRA (Ours) | 5.19 | 76.3 | 74.1 | 45.8 | 145.4 | 85.4 |

Having shown that DoRA consistently achieves better accuracy when fine-tuning LLMs, we next examine whether DoRA remains competitive on multi-modal fine-tuning tasks. We compare DoRA with LoRA and full fine-tuning on VL-BART, which comprises a vision encoder (CLIP-ResNet101 (Radford et al., 2021)) and an encoder-decoder language model (BART-Base (Lewis et al., 2020)), across four image-text tasks: VQA^v2 (Goyal et al., 2017) and GQA (Hudson & Manning, 2019) for visual question answering, NLVR^2 (Suhr et al., 2019) for visual reasoning, and MSCOCO (Chen et al., 2015) for image captioning; and four video-text tasks from the VALUE benchmark (Li et al., 2021): TVQA (Lei et al., 2018) and How2QA (Li et al., 2020) for video question answering, and TVC (Lei et al., 2020) and YC2C (Zhou et al., 2018) for video captioning.

We follow the same framework as (Sung et al., 2022) and fine-tune VL-BART in a multi-task setting for both image-text and video-text tasks. We adopt the same setup as that of LoRA outlined in (Sung et al., 2022) when applying DoRA; see Table 9 for the complete hyperparameters. The results of LoRA and FT for both task groups are quoted directly from (Sung et al., 2022). As shown in Tables 2 and 3, DoRA uniformly surpasses LoRA in accuracy while maintaining a similar count of trainable parameters. In particular, DoRA exceeds LoRA by nearly 1% on image-text understanding, reaching the accuracy level of FT, and achieves roughly 2% higher accuracy than LoRA on video-text understanding.

5.3 Visual Instruction Tuning

Table 4: Average accuracy of LLaVA-1.5-7B on seven vision-language benchmarks.

| Method | # Params (%) | Avg. |
|---|---|---|
| FT | 100 | 66.5 |
| LoRA | 4.61 | 66.9 |
| DoRA (Ours) | 4.63 | 67.6 |

We further scale up the model size and compare DoRA to LoRA and FT on visual instruction tuning with LLaVA-1.5-7B (Liu et al., 2023a), which is composed of a language model, Vicuna-1.5-7B (Peng et al., 2023), and a vision encoder, CLIP ViT-L/336px (Radford et al., 2021). The training data contain several datasets covering VQA (Goyal et al., 2017; Hudson & Manning, 2019; Marino et al., 2019; Schwenk et al., 2022), OCR (Mishra et al., 2019; Sidorov et al., 2020), region-level VQA (Kazemzadeh et al., 2014; Krishna et al., 2017; Mao et al., 2016), visual conversation (Liu et al., 2023a), and language conversation data. We follow the setting of (Liu et al., 2023a) to filter the training data and construct the tuning prompt format. For a fair comparison, DoRA follows the same configuration as the LoRA configuration provided by (Liu et al., 2023a). The fine-tuned models are then evaluated on seven vision-language benchmarks: VQA^v2 (Goyal et al., 2017), GQA (Hudson & Manning, 2019), VisWiz (Gurari et al., 2018), SQA (Lu et al., 2022), VQA^T (Singh et al., 2019), POPE (Li et al., 2023), and MMBench (Liu et al., 2023c).

From Table 4, we observe that the average accuracy of LoRA already surpasses FT, which could imply that FT is overfitting. Given that DoRA is designed to enhance LoRA's performance to more closely resemble that of FT, in scenarios where FT is inferior to LoRA, DoRA's improvement over LoRA might not be as pronounced as in experiments where FT outperforms LoRA. Nonetheless, DoRA still demonstrates superior performance over both LoRA and FT, with average improvements of 0.7% over LoRA and 1.1% over FT. See Table 10 for the hyperparameter settings and Table 12 for the score on each evaluation benchmark.

5.4 Compatibility of DoRA with other LoRA variants

Table 5: Average MT-Bench scores of LLaMA-7B and LLaMA2-7B instruction-tuned with LoRA, DoRA, VeRA, and DVoRA.

| Model | PEFT Method | # Params (%) | Score |
|---|---|---|---|
| LLaMA-7B | LoRA | 2.31 | 5.1 |
| LLaMA-7B | DoRA (Ours) | 2.33 | 5.5 |
| LLaMA-7B | VeRA | 0.02 | 4.3 |
| LLaMA-7B | DVoRA (Ours) | 0.04 | 5.0 |
| LLaMA2-7B | LoRA | 2.31 | 5.7 |
| LLaMA2-7B | DoRA (Ours) | 2.33 | 6.0 |
| LLaMA2-7B | VeRA | 0.02 | 5.5 |
| LLaMA2-7B | DVoRA (Ours) | 0.04 | 6.0 |

Recall from Eq. (1) that $\Delta W$ can be adapted by different LoRA variants. With DoRA, the incremental directional update $\Delta V$ introduced in Eq. (5) can likewise be replaced with alternative LoRA variants. In this section, we select VeRA (Kopiczko et al., 2024) as a case study to explore DoRA's compatibility with other LoRA variants. VeRA freezes a unique pair of random low-rank matrices shared across all layers and employs only minimal layer-specific trainable scaling vectors to capture each layer's incremental updates. This allows VeRA to reduce the number of trainable parameters by 10x compared to LoRA, with only a minimal impact on accuracy. We apply VeRA for the directional update in DoRA and name this combination DVoRA. We assess the effectiveness of DVoRA and DoRA against VeRA and LoRA on LLaMA-7B and LLaMA2-7B, focusing on instruction tuning with the 10K-sample subset of the cleaned Alpaca dataset (Taori et al., 2023). We use the official implementation of VeRA to obtain the results of VeRA and LoRA, and fine-tune the models with DVoRA and DoRA using training settings identical to those of VeRA and LoRA (see Table 11 in the appendix for details). The performance of the fine-tuned models is then evaluated on the MT-Bench benchmark (Zheng et al., 2023) by generating model responses to a pre-defined set of 80 multi-turn questions. These responses are reviewed by GPT-4, which assigns each answer a numerical score out of 10.
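As a rough sketch of the combination (our own illustration: the frozen matrices `B` and `A` would be generated once and shared across layers, the scaling vectors are named `b` and `d` following VeRA with `b` zero-initialized so that $\Delta V = 0$ at the start, and the initial value 0.1 for `d` is illustrative):

```python
import torch
import torch.nn as nn

class DVoRALinear(nn.Module):
    """Sketch: DoRA whose directional update Delta V uses a VeRA-style
    parameterization, Delta V = diag(b) B diag(d) A, with B, A frozen."""
    def __init__(self, W0: torch.Tensor, B: torch.Tensor, A: torch.Tensor):
        super().__init__()
        d_out, rank = B.shape
        self.register_buffer("W0", W0)   # frozen pre-trained weight
        self.register_buffer("B", B)     # frozen random matrix, shared across layers
        self.register_buffer("A", A)     # frozen random matrix, shared across layers
        self.m = nn.Parameter(W0.norm(dim=0, keepdim=True))  # DoRA magnitude
        self.b = nn.Parameter(torch.zeros(d_out))            # trainable scaling (zero init)
        self.d = nn.Parameter(torch.full((rank,), 0.1))      # trainable scaling (illustrative init)

    def forward(self, x):
        delta_v = self.b.unsqueeze(1) * (self.B @ (self.d.unsqueeze(1) * self.A))
        V_prime = self.W0 + delta_v                          # V + Delta V
        norm = V_prime.norm(dim=0, keepdim=True).detach()    # Eq. (11) trick
        return x @ (self.m * V_prime / norm).T
```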

Table 5 presents the average scores for DVoRA, DoRA, VeRA, and LoRA, demonstrating that our proposed methods yield consistent improvements over VeRA and LoRA for both LLaMA-7B and LLaMA2-7B, effectively showcasing the compatibility of DoRA with VeRA. In particular, DVoRA merges the advantageous qualities of DoRA and VeRA, attaining scores on par with or surpassing those of LoRA with significantly fewer parameters. For example, DVoRA outperforms VeRA by 0.7/0.5 points and reaches the same level of accuracy as LoRA on LLaMA-7B and as DoRA on LLaMA2-7B, respectively. Additionally, we present a selection of questions from MT-Bench, accompanied by the responses of LLaMA2-7B fine-tuned with DVoRA and VeRA, in the appendix (Tables 13 and 14), where the answers given by DVoRA tend to be more precise and better structured.


[Figure 3: Average MT-Bench scores of DoRA, DVoRA, LoRA, and VeRA on LLaMA2-7B across different numbers of training samples.]

Next, we further assess DoRA's ability to remain competitive under varying amounts of training data, considering that in practical situations access to extensive fine-tuning datasets is frequently limited. We compare DoRA to LoRA and DVoRA to VeRA when fine-tuning LLaMA2-7B/LLaMA-7B with a range of instruction-tuning sample sizes, specifically 1000, 4000, 7000, and 10000, with 10000 being the setting of (Kopiczko et al., 2024). We visualize the average performance of each method on LLaMA2-7B in Figure 3 and on LLaMA-7B in Figure 6 in the appendix. The results show that DoRA and DVoRA consistently outperform LoRA and VeRA across all training sample sizes. For instance, with 7000 training samples, DoRA and DVoRA surpass LoRA and VeRA by margins of 0.3 and 0.33, respectively. Even when the sample size is reduced to 1000, DoRA and DVoRA maintain their lead with advantages of 0.29 and 0.22 over LoRA and VeRA, respectively. This demonstrates that our methods persistently enhance performance over LoRA and VeRA, regardless of the training sample volume.

5.5 Robustness of DoRA towards different rank settings


[Figure 4: Average accuracy of LoRA and DoRA on the commonsense reasoning tasks for ranks r ∈ {4, 8, 16, 32, 64}.]

This section explores the impact of different rank configurations on DoRA and LoRA by adjusting $r$ within the set {4, 8, 16, 32, 64} and assessing the fine-tuned LLaMA-7B on the commonsense reasoning tasks outlined in Sec. 5.1. The average accuracies of LoRA and DoRA across different ranks are depicted in Figure 4, with detailed numbers in Table 15. From Figure 4, we observe that DoRA consistently surpasses LoRA across all rank configurations. Notably, the performance gap widens for ranks below 8, where LoRA's average accuracy drops to 40.74% for $r=8$ and 39.49% for $r=4$. In contrast, DoRA retains a notable accuracy of 77.96% for $r=8$ and 61.89% for $r=4$, demonstrating its resilience and consistently superior performance over LoRA regardless of the rank setting.

5.6 Tuning Granularity Analysis

Table 6: Tuning granularity analysis. The m and V columns list the modules whose magnitude and directional components are updated, respectively (Q: query, K: key, V: value, O: output, G: gate, U: up, D: down projections).

| Model | PEFT Method | # Params (%) | m | V | Avg. |
|---|---|---|---|---|---|
| LLaMA-7B | LoRA | 0.83 | - | - | 74.7 |
| LLaMA-7B | DoRA (Ours) | 0.84 | QKVUD | QKVUD | 78.1 |
| LLaMA-7B | DoRA (Ours) | 0.39 | QKVOGUD | QKV | 77.5 |
| LLaMA-13B | LoRA | 0.67 | - | - | 80.5 |
| LLaMA-13B | DoRA (Ours) | 0.68 | QKVUD | QKVUD | 81.5 |
| LLaMA-13B | DoRA (Ours) | 0.31 | QKVOGUD | QKV | 81.3 |

The visualization in Figure 2 indicates that significant changes in magnitude often come with relatively small directional changes. Given this observation, and the fact that the directional updates account for most of the trainable parameters, it is natural to ask whether we can decrease the number of trainable parameters by updating only the magnitude components of specific modules while continuing to update both the magnitude and directional components of the remaining linear modules.

Our findings indicate that, in contrast to the configuration suggested for LoRA in (Hu et al., 2023), which requires updates to both the multi-head attention and MLP layers for optimal performance, DoRA can already achieve superior accuracy by updating only the directional and magnitude components of the multi-head attention layers and the magnitude of the MLP layers. Specifically, as shown in Table 6, by updating the directional and magnitude components of the QKV modules and only the magnitude of the remaining layers, DoRA surpasses LoRA by 2.8% on LLaMA-7B and 0.8% on LLaMA-13B while using less than half the trainable parameters of LoRA.
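A sketch of how the magnitude-only variant might look, reusing the decomposition of Eq. (2) (`MagnitudeOnlyLinear` is our name and the module wiring is illustrative, not the authors' implementation):

```python
import torch
import torch.nn as nn

class MagnitudeOnlyLinear(nn.Module):
    """Tune only the magnitude m; the direction stays fixed at the
    pre-trained one, so no low-rank matrices (and far fewer parameters)."""
    def __init__(self, W0: torch.Tensor):
        super().__init__()
        norm = W0.norm(dim=0, keepdim=True)
        self.register_buffer("direction", W0 / norm)   # frozen unit-norm columns
        self.m = nn.Parameter(norm.clone())            # trainable magnitude

    def forward(self, x):
        return x @ (self.m * self.direction).T

# Hypothetical wiring for the Table 6 configuration: full DoRA (m and V)
# on the Q/K/V projections, magnitude-only tuning everywhere else.
```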

6 Conclusion

In this work, we first conduct a novel weight decomposition analysis to reveal the distinct learning patterns of LoRA and FT. Building on these insights, we introduce DoRA, a fine-tuning method that is compatible with LoRA and its variants and exhibits a closer resemblance to FT's learning behavior. DoRA consistently outperforms LoRA across various fine-tuning tasks and model architectures, improving upon LoRA in commonsense reasoning and visual instruction tuning, and showing compatibility with VeRA on the Alpaca instruction tuning task. Moreover, DoRA can be considered a cost-free alternative to LoRA, as its decomposed magnitude and direction components can be merged back into the pre-trained weight after training, ensuring no extra inference overhead. For future work, we wish to explore the generalizability of DoRA in domains beyond language and vision, particularly audio. It would also be intriguing to investigate the potential of DoRA in other applications, such as Stable Diffusion text-to-image fine-tuning.

7 Impact Statements

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

  • Chen, X., Fang, H., Lin, T.-Y., Vedantam, R., Gupta, S., Dollár, P., and Zitnick, C. L. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
  • Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., and Parikh, D. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6904–6913, 2017.
  • Gurari, D., Li, Q., Stangl, A. J., Guo, A., Lin, C., Grauman, K., Luo, J., and Bigham, J. P. VizWiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3608–3617, 2018.
  • He, J., Zhou, C., Ma, X., Berg-Kirkpatrick, T., and Neubig, G. Towards a unified view of parameter-efficient transfer learning. In International Conference on Learning Representations, 2021.
  • He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.
  • Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, pp. 2790–2799, 2019.
  • Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
  • Hu, Z., Wang, L., Lan, Y., Xu, W., Lim, E.-P., Bing, L., Xu, X., Poria, S., and Lee, R. LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023.
  • Hudson, D. A. and Manning, C. D. GQA: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6700–6709, 2019.
  • Hyeon-Woo, N., Ye-Bin, M., and Oh, T.-H. FedPara: Low-rank Hadamard product for communication-efficient federated learning. In International Conference on Learning Representations, 2022.
  • Karimi Mahabadi, R., Ruder, S., Dehghani, M., and Henderson, J. Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 565–576, 2021.
  • Kazemzadeh, S., Ordonez, V., Matten, M., and Berg, T. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 787–798, 2014.
  • Kopiczko, D. J., Blankevoort, T., and Asano, Y. M. VeRA: Vector-based random matrix adaptation. In International Conference on Learning Representations, 2024.
  • Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., et al. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, pp. 32–73, 2017.
  • Lei, J., Yu, L., Bansal, M., and Berg, T. TVQA: Localized, compositional video question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1369–1379, 2018.
  • Lei, J., Yu, L., Berg, T. L., and Bansal, M. TVR: A large-scale dataset for video-subtitle moment retrieval. In European Conference on Computer Vision, pp. 447–463, 2020.
  • Lester, B., Al-Rfou, R., and Constant, N. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059, 2021.
  • Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871–7880, 2020.
  • Li, J., Li, D., Xiong, C., and Hoi, S. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pp. 12888–12900, 2022.
  • Li, L., Chen, Y.-C., Cheng, Y., Gan, Z., Yu, L., and Liu, J. HERO: Hierarchical encoder for Video+Language omni-representation pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2046–2065, 2020.
  • Li, L., Lei, J., Gan, Z., Yu, L., Chen, Y.-C., Pillai, R., Cheng, Y., Zhou, L., Wang, X. E., Wang, W. Y., et al. VALUE: A multi-task benchmark for video-and-language understanding evaluation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.
  • Li, X. L. and Liang, P. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597, 2021.
  • Li, Y., Du, Y., Zhou, K., Wang, J., Zhao, X., and Wen, J.-R. Evaluating object hallucination in large vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 292–305, 2023.
  • Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023a.
  • Liu, W., Qiu, Z., Feng, Y., Xiu, Y., Xue, Y., Yu, L., Feng, H., Liu, Z., Heo, J., Peng, S., et al. Parameter-efficient orthogonal finetuning via butterfly factorization. arXiv preprint arXiv:2311.06243, 2023b.
  • Liu, Y., Duan, H., Zhang, Y., Li, B., Zhang, S., Zhao, W., Yuan, Y., Wang, J., He, C., Liu, Z., et al. MMBench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023c.
  • Lu, P., Mishra, S., Xia, T., Qiu, L., Chang, K.-W., Zhu, S.-C., Tafjord, O., Clark, P., and Kalyan, A. Learn to explain: Multimodal reasoning via thought chains for science question answering. In Advances in Neural Information Processing Systems, pp. 2507–2521, 2022.
  • Mahabadi, R. K., Henderson, J., and Ruder, S. Compacter: Efficient low-rank hypercomplex adapter layers. In Advances in Neural Information Processing Systems, 2021.
  • Mao, J., Huang, J., Toshev, A., Camburu, O., Yuille, A. L., and Murphy, K. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11–20, 2016.
  • Marino, K., Rastegari, M., Farhadi, A., and Mottaghi, R. OK-VQA: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3195–3204, 2019.
  • Mishra, A., Shekhar, S., Singh, A. K., and Chakraborty, A. OCR-VQA: Visual question answering by reading text in images. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 947–952, 2019.
  • OpenAI. Welcome to the OpenAI platform, 2023. URL https://platform.openai.com/docs/introduction.
  • Peng, B., Li, C., He, P., Galley, M., and Gao, J. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277, 2023.
  • Ponti, E. M., Sordoni, A., Bengio, Y., and Reddy, S. Combining modular skills in multitask learning. arXiv preprint arXiv:2202.13914, 2022.
  • Qin, C., Zhang, A., Zhang, Z., Chen, J., Yasunaga, M., and Yang, D. Is ChatGPT a general-purpose natural language processing task solver? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 1339–1384, 2023.
  • Qiu, Z., Liu, W., Feng, H., Xue, Y., Feng, Y., Liu, Z., Zhang, D., Weller, A., and Schölkopf, B. Controlling text-to-image diffusion by orthogonal finetuning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
  • Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763, 2021.
  • Razdaibiedina, A., Mao, Y., Khabsa, M., Lewis, M., Hou, R., Ba, J., and Almahairi, A. Residual prompt tuning: Improving prompt tuning with residual reparameterization. In Findings of the Association for Computational Linguistics: ACL 2023, pp. 6740–6757, 2023.
  • Renduchintala, A., Konuk, T., and Kuchaiev, O. Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying. arXiv preprint arXiv:2311.09578, 2023.
  • Salimans, T. and Kingma, D. P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 901–909, 2016.
  • Schwenk, D., Khandelwal, A., Clark, C., Marino, K., and Mottaghi, R. A-OKVQA: A benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, pp. 146–162, 2022.
  • Sidorov, O., Hu, R., Rohrbach, M., and Singh, A. TextCaps: A dataset for image captioning with reading comprehension. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II, pp. 742–758, 2020.
  • Singh, A., Natarajan, V., Shah, M., Jiang, Y., Chen, X., Batra, D., Parikh, D., and Rohrbach, M. Towards VQA models that can read. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8317–8326, 2019.
  • Suhr, A., Zhou, S., Zhang, A., Zhang, I., Bai, H., and Artzi, Y. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6418–6428, 2019.
  • Sung, Y.-L., Cho, J., and Bansal, M. VL-Adapter: Parameter-efficient transfer learning for vision-and-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5227–5237, 2022.
  • Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., Liang, P., and Hashimoto, T. B. Stanford Alpaca: An instruction-following LLaMA model, 2023. URL https://github.com/tatsu-lab/stanford_alpaca.
  • Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  • Wang, Y., Wu, J., Dabral, T., Zhang, J., Brown, G., Lu, C.-T., Liu, F., Liang, Y., Pang, B., Bendersky, M., et al. Non-intrusive adaptation: Input-centric parameter-efficient fine-tuning for versatile multimodal modeling. arXiv preprint arXiv:2310.12100, 2023.
  • Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, 2022.
  • Yeh, S.-Y., Hsieh, Y.-G., Gao, Z., Yang, B. B., Oh, G., and Gong, Y. Navigating text-to-image customization: From LyCORIS fine-tuning to model evaluation. arXiv preprint arXiv:2309.14859, 2023.
  • Zhang, Q., Chen, M., Bukharin, A., He, P., Cheng, Y., Chen, W., and Zhao, T. Adaptive budget allocation for parameter-efficient fine-tuning. In The Eleventh International Conference on Learning Representations, 2023.
  • Zheng, L., Chiang, W.-L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E., Zhang, H., Gonzalez, J. E., and Stoica, I. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.
  • Zhou, L., Xu, C., and Corso, J. Towards automatic learning of procedures from web instructional videos. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.

Appendix A

A.1 Weight decomposition analysis on the value weight matrix

In this section, we illustrate the changes in magnitude and direction within the value weight matrix for FT, LoRA, and DoRA across different training steps and layers, as shown in Figure 5. The figure reveals patterns similar to those of the query weight matrix depicted in Figure 2, indicating that DoRA is capable of displaying learning behaviors that closely mirror those of FT across various modules.


[Figure 5: Magnitude and direction updates of the value weight matrix for FT, LoRA, and DoRA across layers and training steps.]

A.2 Ablation study for the modification to reduce DoRA training cost

Table 7 presents the GPU memory cost and average accuracy of DoRA with and without the proposed modification on the commonsense reasoning and image-text understanding tasks. The results indicate that the modification reduces training memory by approximately 24.4% when fine-tuning LLaMA and 12.4% for VL-BART. Furthermore, the accuracy of DoRA with the modification remains unchanged for VL-BART and differs by a negligible 0.2 from DoRA without the modification on LLaMA.

Table 7: Training memory and accuracy of DoRA with and without the gradient-graph modification.

| Model | PEFT Method | Accumulation Steps | Batch Size | GPU Memory Cost (GB) | # Params (%) | Avg. |
|---|---|---|---|---|---|---|
| LLaMA-7B | DoRA w/o modification | 4 | 16 | 37.3 | 0.84 | 78.3 |
| LLaMA-7B | DoRA | 4 | 16 | 28.2 (-24.4%) | 0.84 | 78.1 |
| VL-BART | DoRA w/o modification | - | 300 | 23.4 | 5.96 | 77.3 |
| VL-BART | DoRA | - | 300 | 20.5 (-12.4%) | 5.96 | 77.4 |

A.3 Hyperparameters

Table 8: Hyperparameter settings of DoRA for the commonsense reasoning experiments (two configurations per model: DoRA† / DoRA).

| Hyperparameter (DoRA) | LLaMA-7B | LLaMA-13B |
|---|---|---|
| Rank r | 16 / 32 | 16 / 32 |
| α | 32 / 64 | 32 / 64 |
| Dropout | 0.05 | 0.05 |
| Optimizer | AdamW | AdamW |
| LR | 2e-4 / 2e-4 | 3e-4 / 2e-4 |
| LR Scheduler | Linear | Linear |
| Batch size | 16 | 16 |
| Warmup Steps | 100 | 100 |
| Epochs | 3 | 3 |
| Where | Q, K, V, Up, Down | Q, K, V, Up, Down |

Table 9: Hyperparameter settings of DoRA for the image-text and video-text understanding experiments.

| Hyperparameter (DoRA) | Image-text | Video-text |
|---|---|---|
| Rank r | 128 | 128 |
| α | 128 | 128 |
| Dropout | 0.0 | 0.0 |
| Optimizer | AdamW | AdamW |
| LR | 1e-3 | 3e-4 |
| LR Scheduler | Linear | Linear |
| Batch size | 300 | 40 |
| Warmup ratio | 0.1 | 0.1 |
| Epochs | 20 | 7 |
| Where | Q, K | Q, K |

Table 10: Hyperparameter settings of DoRA and LoRA for the visual instruction tuning experiments.

| Hyperparameter | DoRA | LoRA |
|---|---|---|
| Rank r | 128 | 128 |
| α | 256 | 256 |
| Dropout | 0.05 | 0.05 |
| Optimizer | AdamW | AdamW |
| LR | 2e-4 | 2e-4 |
| LR Scheduler | Cosine decay | Cosine decay |
| Batch size | 16 | 16 |
| Warmup ratio | 0.03 | 0.03 |
| Epochs | 1 | 1 |
| Where | Q, K, V, O, Up, Down, Gate | Q, K, V, O, Up, Down, Gate |

Table 11: Hyperparameter settings of DoRA and DVoRA for the instruction tuning experiments (identical for LLaMA-7B and LLaMA2-7B).

| Hyperparameter | DoRA | DVoRA |
|---|---|---|
| Rank r | 64 | 1024 |
| Dropout | 0.0 | 0.0 |
| Optimizer | AdamW | AdamW |
| LR | 4e-4 | 4e-3 |
| LR Scheduler | Cosine | Cosine |
| Batch size | 4 | 4 |
| Accumulation Steps | 4 | 4 |
| Warmup ratio | 0.1 | 0.1 |
| Epochs | 1 | 1 |
| Where | Q, K, V, O, Up, Down, Gate | Q, K, V, O, Up, Down, Gate |

A.4 Visual instruction tuning evaluation result

Table 12 presents the evaluation results of LLaVA-1.5-7B fine-tuned using DoRA, LoRA, and FT on the visual instruction tuning data. The evaluation is conducted on an array of seven different vision-language benchmarks. From the table, we can see that DoRA consistently outperforms FT and achieves a better average accuracy than LoRA.

Table 12: Evaluation of LLaVA-1.5-7B fine-tuned with FT, LoRA, and DoRA on seven vision-language benchmarks.

| Method | # Params (%) | VQA^v2 | GQA | VisWiz | SQA | VQA^T | POPE | MMBench | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| FT | 100 | 78.5 | 61.9 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 66.5 |
| LoRA | 4.61 | 79.1 | 62.9 | 47.8 | 68.4 | 58.2 | 86.4 | 66.1 | 66.9 |
| DoRA (Ours) | 4.63 | 78.6 | 62.9 | 52.2 | 69.9 | 57.0 | 87.2 | 66.1 | 67.6 |

A.5 Instruction-tuning evaluation examples

We sample two questions from the MT-Bench question set and present the respective responses from DVoRA and VeRA, together with the scores and comments provided by GPT-4. From these examples, we can observe that responses from DVoRA are generally more accurate and better organized than those from VeRA. For instance, in response to Question 24, DVoRA leverages a well-known superhero story that is familiar to children, whereas VeRA's response begins with an uncertain tone, potentially making it less engaging than DVoRA's reply.

Question 24: As a superhero, how would you explain your origin story to a curious child?

VeRA: Well, I'm not really sure where I come from. All I know is that one day, I woke up with special powers and abilities that allowed me to help people. I decided to use my powers to fight crime and protect the innocent, and that's how I became a superhero.

DVoRA: As a superhero, I have the ability to use my super powers to protect people and fight against evil. My origin story begins when I was just a regular kid like you, but one day, I was bitten by a radioactive spider, and it gave me incredible powers. I could climb walls, shoot webs, and see in the dark. I decided to use my powers to help people, and that's how I became your friendly neighborhood Spider-Man.

GPT-4 Evaluation (Score: 5 for VeRA, 7 for DVoRA): Assistant 1's response starts with an uncertain tone, which might not be very engaging for a child. The explanation is quite vague and doesn't offer a clear or imaginative origin story that could captivate a child's interest. It does, however, maintain the theme of using powers for good, which is a positive message. Assistant 2's response is more detailed and uses a familiar superhero origin story (Spider-Man's) that many children can relate to and understand. It's engaging and provides a clear narrative of how an ordinary person can become a superhero. However, it directly references Spider-Man, which is not an original origin story and could be misleading if the assistant is supposed to be a unique superhero. Despite this, the response is more likely to resonate with a child due to its clarity and reliability. Both responses could be improved by providing an original and imaginative origin story that doesn't rely on existing superhero narratives. They should also ensure that the story is simple enough for a child to understand while being engaging and inspiring.

Question 33: Why might someone choose to use a paper map or ask for directions instead of relying on a GPS device or smartphone app?

VeRA: There are several reasons why someone might choose to use a paper map or ask for directions instead of relying on a GPS device or smartphone app. Some people may prefer the physical act of navigating with a paper map, as it can be more tactile and interactive. Others may prefer to have a hard copy of the map in case the GPS or app fails or the battery dies. Additionally, some people may not have access to a GPS device or smartphone, or may not want to rely on technology for navigation. Finally, some people may simply prefer the security and peace of mind that comes with knowing exactly where they are and how to get to their destination.

DVoRA: There are several reasons why someone might choose to use a paper map or ask for directions instead of relying on a GPS device or smartphone app. Here are some of them:
1. Navigation accuracy: GPS devices and smartphone apps rely on satellite signals to pinpoint your location, but this technology is not foolproof. GPS signals can be blocked by tall buildings, tunnels, or other obstacles, resulting in inaccurate or missing information. Paper maps, on the other hand, are based on accurate, up-to-date information that is not subject to these limitations.
2. Cost: GPS devices and smartphone apps can be expensive to purchase and maintain. Paper maps are free to use and do not require an ongoing subscription or data plan.
3. Personal preference: Some people prefer the tactile experience of using a paper map, which can be easier to read and navigate than a screen. Others enjoy the challenge of navigating without technology, which can help them learn more about the area they are visiting.
4. Battery life: GPS devices and smartphone apps can drain batteries quickly, especially when used for navigation. Paper maps do not require power, so they can be used for extended periods without recharging.
5. Offline use: GPS devices and smartphone apps rely on an internet connection to function properly. If there is no signal, the device will not be able to provide accurate information. Paper maps, on the other hand, can be used offline, making them ideal for use in remote or unpopulated areas.
In summary, there are several reasons why someone might choose to use a paper map or ask for directions instead of relying on a GPS device or smartphone app. These include navigation accuracy, cost, personal preference, battery life, and offline use.

GPT-4 Evaluation (Score: 6 for VeRA, 8 for DVoRA): Assistant 1's response is somewhat brief and lacks specific details that could be helpful to the user. It mentions the tactile nature of paper maps, the possibility of technology failure, lack of access to technology, and a general preference for security and peace of mind. However, it does not delve into the practical considerations such as GPS signal limitations, cost, battery life, or offline use, which are important factors in deciding between paper maps and digital navigation tools. Assistant 2's response is more comprehensive and provides a structured list of reasons, including navigation accuracy, cost, personal preference, battery life, and offline use. Each point is explained clearly, giving the user a better understanding of the potential advantages of paper maps over GPS devices or smartphone apps. The response is relevant, accurate, and detailed, making it more helpful to the user. Both assistants provided relevant and accurate information, but Assistant 2's answer is more detailed and structured, which is why it receives a higher score. The scores reflect the level of detail and the usefulness of the information provided to the user, with Assistant 2 offering a more complete answer to the question.

A.6 Robustness towards different numbers of instruction tuning examples

Figure 6 depicts the scores of DoRA, LoRA, DVoRA, and VeRA when fine-tuning LLaMA-7B with varying instruction-tuning sample sizes: 1000, 4000, 7000, and 10000. From the figure, we can observe that, similar to the results on LLaMA2-7B, both DoRA and DVoRA consistently outperform LoRA and VeRA across the entire range of training sample sizes.


[Figure 6: Scores of DoRA, LoRA, DVoRA, and VeRA when fine-tuning LLaMA-7B with 1000, 4000, 7000, and 10000 instruction-tuning samples.]

A.7 Robustness of DoRA towards different ranks

Table 15 compares the average accuracy of LoRA and DoRA across various rank settings on the commonsense reasoning tasks. DoRA consistently outperforms LoRA at all rank settings, with the performance gap widening as the rank decreases. This suggests that our method effectively enhances the learning capacity of LoRA, enabling it to achieve better accuracy with fewer trainable parameters.

Table 15: Accuracy of LoRA and DoRA on the commonsense reasoning tasks across different ranks.

| PEFT Method | Rank r | # Params (%) | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LoRA | 4 | 0.10 | 2.3 | 46.1 | 18.3 | 19.7 | 55.2 | 65.4 | 51.9 | 57.0 | 39.5 |
| LoRA | 8 | 0.21 | 31.3 | 57.0 | 44.0 | 11.8 | 43.3 | 45.7 | 39.2 | 53.8 | 40.7 |
| LoRA | 16 | 0.42 | 69.9 | 77.8 | 75.1 | 72.1 | 55.8 | 77.1 | 62.2 | 78.0 | 70.9 |
| LoRA | 32 | 0.83 | 68.9 | 80.7 | 77.4 | 78.1 | 78.8 | 77.8 | 61.3 | 74.8 | 74.7 |
| LoRA | 64 | 1.64 | 66.7 | 79.1 | 75.7 | 17.6 | 78.8 | 73.3 | 59.6 | 75.2 | 65.8 |
| DoRA (Ours) | 4 | 0.11 | 51.3 | 42.2 | 77.8 | 25.4 | 78.8 | 78.7 | 62.5 | 78.6 | 61.9 |
| DoRA (Ours) | 8 | 0.22 | 69.9 | 81.8 | 79.7 | 85.2 | 80.1 | 81.5 | 65.7 | 79.8 | 77.9 |
| DoRA (Ours) | 16 | 0.43 | 70.0 | 82.6 | 79.7 | 83.2 | 80.6 | 80.6 | 65.4 | 77.6 | 77.5 |
| DoRA (Ours) | 32 | 0.84 | 68.5 | 82.9 | 79.6 | 84.8 | 80.8 | 81.4 | 65.8 | 81.0 | 78.1 |
| DoRA (Ours) | 64 | 1.65 | 69.9 | 81.4 | 79.1 | 40.7 | 80.0 | 80.9 | 65.5 | 79.4 | 72.1 |
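One way to read the # Params (%) column: at a fixed rank, DoRA trains only a magnitude vector on top of LoRA's two low-rank factors, so its parameter overhead per adapted matrix is constant in the rank and proportionally larger at low rank (0.10% vs. 0.11% at r = 4, but 1.64% vs. 1.65% at r = 64). A back-of-the-envelope sketch, assuming one magnitude scalar per weight column:

```python
def trainable_params(d_out: int, d_in: int, rank: int, dora: bool) -> int:
    """Trainable parameters for one adapted d_out x d_in weight matrix.
    LoRA trains factors A (rank x d_in) and B (d_out x rank); DoRA adds a
    magnitude vector with one scalar per column (assumed length d_in)."""
    n = rank * (d_in + d_out)
    if dora:
        n += d_in
    return n

# Example: a 4096 x 4096 projection at the lowest and highest ranks above.
for r in (4, 64):
    lora = trainable_params(4096, 4096, r, dora=False)
    dora = trainable_params(4096, 4096, r, dora=True)
    print(f"r={r}: LoRA {lora:,} params, DoRA {dora:,} params "
          f"(+{100 * (dora - lora) / lora:.1f}%)")
```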
