National Central University, Taiwan
Stars
Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View"
A blazing fast, information dense media player built with Next.js.
Kolmogorov-Arnold Transformer: A PyTorch Implementation with CUDA kernel
Mask Attention Networks: Rethinking and Strengthen Transformer (NAACL 2021)
An automatic compilation script for 7-Zip, which replaces the default file association icons and file manager skins with more attractive ones, and adds associations for Jar and War files.
A bundled configuration pack for the Weasel (小狼毫) input method that integrates the Rime-ice (雾凇拼音) Pinyin and Kongshan Wubi (空山五笔) schemas, offering one-click installation to make setup easy for newcomers to the Rime (中州韵) input method.
Writing AI Conference Papers: A Handbook for Beginners
A PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers"
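As a rough illustration of the strided pattern from that paper (a simplified numpy sketch, not the repo's code or API), each query attends to its recent neighbors plus every stride-th earlier position:

```python
import numpy as np

def strided_sparse_mask(n: int, stride: int) -> np.ndarray:
    """Boolean causal mask for the strided pattern in "Generating Long
    Sequences with Sparse Transformers": position i attends to the
    previous `stride` positions and to every stride-th earlier position.
    Returns an (n, n) array where True means "may attend"."""
    i = np.arange(n)[:, None]  # query positions
    j = np.arange(n)[None, :]  # key positions
    causal = j <= i
    local = (i - j) < stride             # recent tokens
    summary = ((i - j) % stride) == 0    # every stride-th token
    return causal & (local | summary)

mask = strided_sparse_mask(8, stride=4)
```

Disallowed positions get their attention scores set to -inf before the softmax, which is what makes the pattern sparse in practice.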
Using FlexAttention to compute attention with different masking patterns
An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.
A PyTorch implementation of SMU: Smooth Activation Function for Deep Networks Using Smoothing Maximum Technique
The AdEMAMix Optimizer: Better, Faster, Older.
A PyTorch implementation of the Transformer model in "Attention is All You Need".
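The core operation of that paper, scaled dot-product attention, can be sketched in plain numpy (a single-head simplification for reference, not the repo's implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V from "Attention Is All You Need".
    q, k, v: (seq_len, d_k) arrays; returns a (seq_len, d_k) array."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    # numerically stable softmax over the key dimension
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Each output row is a convex combination of the rows of `v`, weighted by query-key similarity; the 1/sqrt(d_k) scaling keeps the logits from saturating the softmax at large head dimensions.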
Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers.
Code release for "Flowformer: Linearizing Transformers with Conservation Flows" (ICML 2022), https://arxiv.org/pdf/2202.06258.pdf
Code for the paper, "Distribution Augmentation for Generative Modeling", ICML 2020.
A download tool for crawling ebooks from the internet.
The fully customizable desktop environment for Windows 10/11, with a window tiling manager included.
Official implementation of "Implicit Neural Representations with Periodic Activation Functions"
[DEPRECATED] Repo for exploring multi-task learning approaches to learning sentence representations
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
A PyTorch implementation of the Sparsely-Gated Mixture-of-Experts layer, for massively increasing the parameter count of language models
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
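The top-k gating idea shared by the two implementations above can be sketched in a few lines of numpy (noisy gating and the load-balancing loss omitted; names are illustrative, not taken from either repo):

```python
import numpy as np

def top_k_gate(logits: np.ndarray, k: int) -> np.ndarray:
    """Simplified sparsely-gated routing (Shazeer et al., 2017):
    keep the k largest gate logits per token, softmax over only
    those, and zero out every other expert.
    logits: (tokens, experts); returns gates of the same shape."""
    top = np.argpartition(logits, -k, axis=-1)[:, -k:]  # top-k expert ids
    gates = np.zeros_like(logits)
    picked = np.take_along_axis(logits, top, axis=-1)
    picked = np.exp(picked - picked.max(axis=-1, keepdims=True))
    picked /= picked.sum(axis=-1, keepdims=True)        # softmax over top-k
    np.put_along_axis(gates, top, picked, axis=-1)
    return gates
```

Only the k selected experts run a forward pass for each token, which is what lets the total parameter count grow far faster than the per-token compute.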
A curated reading list of research in Mixture-of-Experts (MoE).
Copycat Clipboard is an intuitive clipboard manager designed to enhance your workflow. Seamlessly switch between documents, apps, and devices while keeping all your copied items organized and accessible.