flan-t5-large-grammar-synthesis Llama.cpp

I'm using fairydreaming's T5 branch of llama.cpp; I'm not sure whether the current llama-cpp-python server supports T5.
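For reference, a minimal sketch of how the conversion and inference steps could look with llama.cpp tooling. This assumes the branch provides the same `convert_hf_to_gguf.py` script, `llama-quantize`, and `llama-cli` binaries as current mainline llama.cpp; exact script names and flags may differ on that branch:

```shell
# Convert the HF checkpoint to GGUF (script name as in mainline llama.cpp)
python convert_hf_to_gguf.py pszemraj/flan-t5-large-grammar-synthesis \
  --outfile flan-t5-large-grammar-synthesis-f16.gguf

# Quantize to Q6_K
./llama-quantize flan-t5-large-grammar-synthesis-f16.gguf \
  flan-t5-large-grammar-synthesis-Q6_K.gguf Q6_K

# Run a grammar-correction prompt with the CLI
./llama-cli -m flan-t5-large-grammar-synthesis-Q6_K.gguf \
  -p "i can has cheezburger"
```

The llama-cpp-python server question is separate: even if the CLI above works, the Python bindings need their own T5/encoder-decoder support to serve the model.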

| Model (Q6_K GGUF) | Reference1 |
|---|---|
| 0.1 | 2 |
| 0.1 | 1 |
| 1 | 100 |
| 1 | 2 |