An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites
- Updated Jul 30, 2024
Awesome List of Attention Modules and Plug&Play Modules in Computer Vision
Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
PyTorch Dual-Attention LSTM-Autoencoder For Multivariate Time Series
🦖 PyTorch implementation of popular attention mechanisms, Vision Transformers, MLP-like models, and CNNs. 🔥🔥🔥
Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models on Natural Language Processing (NLP) tasks. (framework-agnostic)
Sparse and structured neural attention mechanisms
Learning YOLOv3 from scratch (a from-scratch walkthrough of the YOLOv3 code)
Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling
PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model"
Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens"
This repository contains various types of attention mechanisms, such as Bahdanau attention, soft attention, additive attention, and hierarchical attention, implemented in PyTorch, TensorFlow, and Keras
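As a rough, framework-free sketch of what additive (Bahdanau) attention computes: a score for each key from a small feed-forward layer over the query and key, a softmax over those scores, and a weighted sum as the context. The function and variable names here are illustrative, not taken from the repository above.

```python
import numpy as np

def additive_attention(query, keys, W_q, W_k, v):
    # Bahdanau-style additive attention:
    #   score_i = v^T tanh(W_q q + W_k k_i)
    scores = np.tanh(query @ W_q + keys @ W_k) @ v  # shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over the T positions
    context = weights @ keys                        # weighted sum of the keys
    return context, weights

# Toy example: 5 key vectors of dimension 4, hidden size 8.
rng = np.random.default_rng(0)
d, T, h = 4, 5, 8
q = rng.normal(size=d)
K = rng.normal(size=(T, d))
Wq = rng.normal(size=(d, h))
Wk = rng.normal(size=(d, h))
v = rng.normal(size=h)
ctx, w = additive_attention(q, K, Wq, Wk, v)
```

The soft-attention variants listed above differ mainly in the scoring function; additive attention uses this learned tanh layer, while dot-product attention scores with `q · k_i` directly.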
Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible experimentation and exploration.
PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers"
Multi-head attention for image classification
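A minimal NumPy sketch of multi-head attention as typically applied to a sequence of image-patch tokens: project the input into per-head queries, keys, and values, run scaled dot-product attention in each head, then concatenate and project back. The shapes and weight names here are assumptions for illustration, not this repository's API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    # X: (T, d_model) token matrix, e.g. T flattened image patches.
    T, d = X.shape
    dh = d // n_heads  # per-head dimension
    # Project and split into heads: (H, T, dh)
    Q = (X @ Wq).reshape(T, n_heads, dh).transpose(1, 0, 2)
    K = (X @ Wk).reshape(T, n_heads, dh).transpose(1, 0, 2)
    V = (X @ Wv).reshape(T, n_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (H, T, T)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(dh)
    out = softmax(scores) @ V                      # (H, T, dh)
    out = out.transpose(1, 0, 2).reshape(T, d)     # concatenate heads
    return out @ Wo                                # final output projection

# Toy example: 6 patch tokens of dimension 8, split across 2 heads.
rng = np.random.default_rng(0)
T, d = 6, 8
X = rng.normal(size=(T, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))
Y = multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads=2)
```

For classification, such a block is usually stacked with residual connections and layer norm, and a class token or pooled output feeds the final classifier head.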
In this repository, one can find the code for my master's thesis project. The main goal of the project was to study and improve attention mechanisms for trajectory prediction of moving agents.
Hierarchical probabilistic 3D U-Net, with attention mechanisms (Attention U-Net, SEResNet) and a nested decoder structure with deep supervision (UNet++). Built in TensorFlow 2.5. Configured for voxel-level clinically significant prostate cancer detection in multi-channel 3D bpMRI scans.
Sequence-to-sequence models with attention, built from scratch using TensorFlow
VAAS is an inference-first, research-driven library for image integrity analysis. It integrates Vision Transformer Attention Mechanisms with patch-level self-consistency analysis to enable fine-grained localization and detection of visual inconsistencies across diverse image analysis tasks.