Quick Start
Available to: Students, Faculty, Researchers, Staff
Cost: No charge during the pilot phase
Visit TerrierGPT
TerrierGPT is designed to provide BU students, faculty, and staff with secure and flexible access to leading generative artificial intelligence (GenAI) models. Using a LibreChat interface, TerrierGPT grants access to paid versions of several AI models, including OpenAI GPT-4.1, Anthropic Claude, and Google Gemini. These paid tiers offer improved (though not perfect) accuracy, and your inputs are not used to train the models. Be mindful of the data you input into these tools: they are approved for data classified as Confidential, but not Restricted Use.
Best Practices
- Be aware that GenAI may hallucinate (give false or misleading information presented as fact).
- Review the output for accuracy and bias.
- GenAI is a tool to augment your work and not to replace it.
- GenAI should not be used to make decisions that should be made by humans.
- Examine and understand ethical issues.
- Before inputting any personal data, ensure that your use complies with the law and BU policies.
Faculty and Staff
- Do not input HIPAA data or data classified as restricted use.
- Please add a statement to your syllabi clearly articulating acceptable and unacceptable uses of GenAI in your course, and advise students to review it during the first week of the course.
Students
- Please review your course syllabus to understand how you can use GenAI in your work and any requirements, such as citing GenAI contributions.
- Please review the following policies:
Your Data
The GenAI providers have agreed not to use your inputs, or the outputs generated, to train their AI models. They have also agreed that you own all of the output, or at a minimum that they will not assert any ownership rights in it. A very limited number of IS&T administrators may have access to the backend systems storing your information; under the Access to Electronic Information Policy, they may access that information only under specific, warranted circumstances and with proper approvals.
GenAI providers may use attributes, classifiers, and metadata (information about how you interact with a product or service, including metrics like frequency, duration, and patterns of use) for purposes such as monitoring for abusive conduct or improving their products. In some cases where abuse is flagged, GenAI providers may retain prompts and have a human review them.
Getting Started
Visit terriergpt.bu.edu to log in and start using the tool. It is available at no cost to you during the pilot phase.
Frequently Asked Questions
What features does TerrierGPT offer?
TerrierGPT offers several features, including:
- Chat interface for conversations with different LLMs.
- Storage of chat history and prompts.
- Semantic searching over files provided by the user through the UI.
- Upload and analysis of images.
- Creation of specialized, AI-driven agents that work with any supported model, with no coding required.
- Switching between AI endpoints mid-chat.
How does TerrierGPT compare to ChatGPT?
While TerrierGPT offers a similar chat experience to ChatGPT and other AI chat services, it differs in a number of key ways:
TerrierGPT allows you to interact with models from many different providers, including OpenAI, Anthropic, Amazon, and Meta. We hope to introduce more leading models soon.
TerrierGPT is approved for use with BU data up to Confidential due to our existing agreements with our service providers.
TerrierGPT does not support some features that ChatGPT and other services offer such as image generation, web search, and memory.
TerrierGPT is also not compatible with importing existing custom GPTs or chat histories from ChatGPT. In TerrierGPT, the equivalent of a Custom GPT is an Agent configured with specific system instructions, file attachments, and model parameters.
Additionally, TerrierGPT does not support the same concept of “memory” that is available in ChatGPT. TerrierGPT only passes along the context within a Chat and will not pass context between Chats.
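For readers who want a concrete picture of what this means, here is a minimal sketch (illustrative Python, not TerrierGPT's actual code) of per-chat context: each chat keeps only its own message history, and only that history accompanies each new prompt.

```python
# Illustrative sketch only: per-chat context, with no memory shared between chats.
class Chat:
    def __init__(self):
        self.messages = []  # context for this chat only

    def send(self, user_text, model_call):
        self.messages.append({"role": "user", "content": user_text})
        reply = model_call(self.messages)  # only this chat's history is passed along
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# A stand-in "model" that just reports how much context it received.
echo_model = lambda msgs: f"(reply based on {len(msgs)} message(s) in this chat)"

chat_a = Chat()
chat_b = Chat()
print(chat_a.send("Remember that my project is due Friday.", echo_model))
print(chat_b.send("When is my project due?", echo_model))  # chat_b has no such context
```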
Are there capacity limits on TerrierGPT?
Users start with 1 million credits in TerrierGPT. When your balance reaches zero, it will be reset to 1 million credits. Balance resets can occur once in a 24-hour period.
Different activities may consume varying amounts of credits. For example, a basic prompt to a small model may use a few hundred credits, whereas a complex prompt or analyzing a file or image with a more complex model may consume thousands to tens of thousands of credits.
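As a rough illustration of how a balance might be drawn down, the sketch below uses entirely hypothetical per-activity costs chosen to fall within the ranges described above; actual credit costs depend on the model, prompt length, and attachments.

```python
# Hypothetical credit costs for illustration only; not TerrierGPT's actual billing rates.
HYPOTHETICAL_COSTS = {
    "basic_prompt_small_model": 300,      # "a few hundred credits"
    "complex_prompt_large_model": 8_000,  # "thousands of credits"
    "file_analysis_large_model": 40_000,  # "tens of thousands of credits"
}

balance = 1_000_000  # starting balance; resets to 1,000,000 once it reaches zero
usage = ["basic_prompt_small_model"] * 50 + ["file_analysis_large_model"] * 3
for activity in usage:
    balance -= HYPOTHETICAL_COSTS[activity]

print(f"Remaining credits: {balance:,}")  # 1,000,000 - 15,000 - 120,000 = 865,000
```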
You can check your credit balance by selecting your username in the lower left corner of the page.
How does the implementation of TerrierGPT align with the University's sustainability efforts?
TerrierGPT was implemented with careful consideration of sustainability concerns. It’s important to clarify that Boston University has not created or trained its own large language model (LLM) for this initiative. Instead, TerrierGPT leverages existing LLMs available on the market. By using pre-existing models, we have avoided the significant computational resources required to build and train LLMs from scratch, which is one of the primary contributors to their environmental impact.
Furthermore, sustainability in AI is an area of active research at BU. Several faculty members are investigating ways to reduce the environmental footprint of building and maintaining LLMs. Their work has the potential to shape a more sustainable AI landscape, ensuring that advancements in AI go hand-in-hand with responsible environmental stewardship.
By adopting TerrierGPT in this manner, we are moving forward with innovation while remaining mindful of our commitment to sustainability.
What are the limits on file uploads with TerrierGPT?
TerrierGPT allows users to upload files for analysis in their prompts. The limits for file uploads are listed below, with a simple check against them sketched after the list:
- Maximum files per upload: 5
- Maximum size of each file: 10 MB
- Maximum total size of files in an upload: 20 MB
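As an illustration, the sketch below checks a set of local files against these limits before you attempt an upload. The limits come from this page; the check itself is hypothetical and not part of TerrierGPT.

```python
# Illustrative pre-upload check against TerrierGPT's documented file limits.
import os

MAX_FILES = 5
MAX_FILE_MB = 10
MAX_TOTAL_MB = 20

def validate_upload(paths):
    """Return "OK" if the files fit within the documented limits, else a reason."""
    if len(paths) > MAX_FILES:
        return f"Too many files: {len(paths)} > {MAX_FILES}"
    sizes_mb = [os.path.getsize(p) / (1024 * 1024) for p in paths]
    for path, size in zip(paths, sizes_mb):
        if size > MAX_FILE_MB:
            return f"{path} is {size:.1f} MB; the per-file limit is {MAX_FILE_MB} MB"
    if sum(sizes_mb) > MAX_TOTAL_MB:
        return f"Total size {sum(sizes_mb):.1f} MB exceeds the {MAX_TOTAL_MB} MB limit"
    return "OK"

# Example: validate_upload(["notes.pdf", "data.csv"])
```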
How do I use Bookmarks in TerrierGPT?
Bookmarks in TerrierGPT act as a method for tagging and grouping chats.
- Bookmarks can be added by expanding the right pane, opening the Bookmarks section, and choosing New Bookmark.
- A conversation can be added to a Bookmark by clicking the bookmark icon next to the model name on a chat and choosing the Bookmark group you created.
- Chats can be filtered by Bookmark by selecting the Bookmark button in the left pane and choosing the Bookmark group.
What AI models are integrated with TerrierGPT?
TerrierGPT integrates several leading foundational large language models (LLMs), including:
- OpenAI GPT-4.1 mini: Fast small model for focused tasks
- OpenAI GPT-4.1: High-intelligence model for complex tasks
- OpenAI o3-mini: Intelligent reasoning model for coding, math, and science
- Anthropic Claude 3.5 Haiku: Fast model for a wider variety of tasks
- Amazon Nova Lite 1.0: Highly optimized model for focused tasks
- Meta Llama 3.2: High-intelligence model for research and practical application
- Google Gemini 2.0 Flash Lite: Cost efficiency and low latency
- Google Gemini 2.0 Flash: Speed, thinking, and real-time streaming
Does TerrierGPT offer API access to LLMs?
At this time, we are not offering API access to our LLMs through TerrierGPT.
How is my data handled in TerrierGPT?
Who has access to my data in TerrierGPT?
Like other services (email, file storage), only a very limited number of IS&T administrators have access to the backend systems storing this information. We are bound by University policy to access this information only under specific, warranted circumstances with proper approvals. This is documented in the Access to Electronic Information Policy.
What happens when a chat is marked as Temporary?
Temporary Chats do not appear in the chat history sidebar, are excluded from search results, and cannot be bookmarked. Temporary Chats are stored in the database for 30 days and then automatically deleted.
Can TerrierGPT generate images?
No, TerrierGPT only provides chat access to LLMs. While it can analyze uploaded images, it cannot generate images.
How do I create an Agent?
To create a new agent, select “Agents” from the endpoint menu and open the Agent Builder panel found in the Side Panel.
The creation form includes:
- Avatar: Upload a custom avatar to personalize your agent
- Name: Choose a distinctive name for your agent
- Description: Optional details about your agent’s purpose
- Instructions: System instructions that define your agent’s behavior
- Model: Select from available providers and models
Existing agents can be selected from the top dropdown of the Side Panel, or by mentioning them with "@" in the chat input.
Model Configuration
The model parameters interface allows fine-tuning of your agent's responses (a conceptual sketch combining these fields follows the list):
- Temperature (0-1 scale for response creativity)
- Max context tokens
- Max output tokens
- Additional provider-specific settings
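Putting the Agent Builder fields and model parameters together, the sketch below is a hypothetical, code-style view of an agent definition. TerrierGPT agents are configured entirely through the UI (there is no API access), so this is only a conceptual illustration; the field names simply mirror the form described above, and the example values are made up.

```python
# Conceptual illustration of an agent definition; TerrierGPT itself is configured via the UI.
agent = {
    "name": "Course Assistant",                              # distinctive name
    "description": "Answers questions about the syllabus.",  # optional purpose
    "instructions": "You are a helpful TA. Cite the syllabus when possible.",
    "model": "gpt-4.1-mini",                                 # one of the available models
    "parameters": {
        "temperature": 0.3,           # 0-1 scale; lower = more focused, higher = more creative
        "max_context_tokens": 8_000,  # how much conversation/file context is kept
        "max_output_tokens": 1_000,   # upper bound on the length of each response
    },
}
```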
Agent Capabilities
File Search
The File Search capability enables the following (a minimal sketch of the RAG pattern follows this list):
- RAG (Retrieval-Augmented Generation) functionality
- Semantic search across uploaded documents
- Context-aware responses based on file contents
- File attachment support at both agent and chat thread levels
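To make the RAG idea concrete, here is a minimal, generic sketch of the pattern: retrieve the chunks of uploaded documents most relevant to the question, then include them in the prompt sent to the model. This is not TerrierGPT's implementation; real systems use learned vector embeddings, while this toy example uses simple word overlap for the retrieval step.

```python
# Generic RAG sketch for illustration: retrieve relevant chunks, then build a prompt.
from collections import Counter
import math

def similarity(a, b):
    """Cosine similarity over word counts (a toy stand-in for an embedding model)."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = math.sqrt(sum(v * v for v in wa.values())) * math.sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    return sorted(chunks, key=lambda c: similarity(query, c), reverse=True)[:k]

chunks = [
    "The final project is due on December 12.",
    "Office hours are Tuesdays at 3pm.",
    "Late submissions lose 10% per day.",
]
query = "When is the final project due?"
context_text = "\n".join(retrieve(query, chunks))
prompt = f"Answer using this context:\n{context_text}\n\nQuestion: {query}"
print(prompt)  # this assembled prompt would then be sent to the selected model
```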
Tools and Actions are not yet supported in TerrierGPT agents.
Why is the answer TerrierGPT gave me out of date or inaccurate?
It is important to note that TerrierGPT and Large Language Models (LLMs) are different from a search engine like Google, although some products like ChatGPT have begun to incorporate web search capabilities into their chat interfaces.
When you interact directly with an LLM, its answers are generated from the data it was trained on. The models currently in TerrierGPT mostly have training data cutoff dates roughly 18 to 24 months in the past. And while these models have billions of parameters and were trained on vast amounts of text, there is no guarantee an LLM was trained on a specific web page. In the future, we do hope to introduce Web Search capabilities to TerrierGPT.
In general, TerrierGPT is meant to be an exploratory tool for interacting with AI and LLMs and comparing differences between models and providers.
What should I do if I encounter issues with TerrierGPT?
If you encounter any issues, you can reach out to the IT Help Center for support by emailing ithelp@bu.edu or calling (617) 353-HELP (4357). The IT Help Center can assist with troubleshooting and provide guidance on using TerrierGPT.
My profile picture is missing.
If TerrierGPT is not displaying your profile photo, try signing out and signing back in.
Are there any limitations to using TerrierGPT?
Yes, there are some limitations:
- The multi-convo feature is disabled due to its experimental nature and current bugs.
- Prompt sharing and agent sharing are disabled initially.
- The Presets feature is turned off because it overlaps with agents.
- Image generation is not supported.
How can I provide feedback on TerrierGPT?
You can provide feedback by contacting the IT Help Center: email ithelp@bu.edu or call (617) 353-HELP (4357). Your feedback is valuable and helps improve the application.
When will newer models be available in TerrierGPT?
Model selection for TerrierGPT is a dynamic process driven by multiple considerations, including:
- API accessibility and cost
- Community interest and usage patterns
- Enterprise data security requirements and vendor agreements
- Development team resources and priorities
While we are committed to continuously expanding and evolving our model offerings, we cannot provide specific timelines for individual model additions. IS&T plans to regularly evaluate the AI landscape and will introduce new models that align with our goals of providing accessible, secure, and diverse generative AI capabilities.
Can you provide more details on the models used?
- OpenAI GPT-4.1-Mini
- Description: GPT-4.1-Mini is a lightweight version of OpenAI’s GPT-4.1 optimized for faster processing and lower computational demand. While it retains many of the core features of GPT-4.1, it sacrifices some depth in reasoning and contextual understanding.
- Strengths: Fast response time, efficient performance on smaller hardware, suitable for quick tasks or summarization. Its ability to streamline operations with smaller models makes it ideal for day-to-day applications in low-resource environments.
- Weaknesses: Struggles with complex tasks requiring deep contextual understanding or extensive reasoning. Sometimes produces overly simplistic outputs.
- Use Cases:
- Quick summarization of reading materials, basic content generation, or generating flashcards for student learning.
- Automating basic email replies, scheduling assistance, or developing simple FAQ bots.
- OpenAI GPT-4.1
- Description: OpenAI's full GPT-4.1 model, optimized for efficiency while retaining robust reasoning and language capabilities. It offers balanced performance for both depth and speed.
- Strengths: Excellent for complex tasks such as data analysis, research synthesis, and philosophical or ethical reasoning.
- Weaknesses: May occasionally produce verbose responses or require iterative refinement for highly specific tasks. Slower than smaller models.
- Use Cases:
- Literature review, hypothesis generation, and summarizing scholarly articles
- Strategic planning (e.g., drafting mission statements), summarizing large datasets for decision-making, and writing complex documents
- OpenAI o3-Mini
- Description: A smaller-scale, task-specific language model designed to handle specific use cases with high efficiency. While less generalized than GPT-4.1 models, it excels in predefined or repetitive tasks.
- Strengths: Efficient and reliable when working within narrow scopes; requires less tuning for specific applications.
- Weaknesses: Limited in adaptability for tasks outside pre-optimized domains. Less useful for open-ended problem-solving.
- Use Cases:
- Quiz generation, managing and organizing student feedback, or producing simple study guides based on predefined templates.
- Handling form-generation tasks like applications, reports, or surveys, and supporting workflows tied to predictable inputs.
- Anthropic Claude 3.5 Haiku
- Description: Claude 3.5 Haiku is a compact version of Anthropic’s Claude tailored for creative and structured tasks. It values alignment and interpretability, offering safe and consistent outputs.
- Strengths: Excels at creative writing, generating concise insights, and maintaining ethical guardrails. Produces polished content with a focus on safety and precision.
- Weaknesses: May lack depth or specificity in highly technical or research-intensive tasks. Tends to err on the side of caution, which can limit exploratory or bold output.
- Use Cases:
- Assisting with creative projects, developing ethical case studies, or drafting discussion prompts for courses.
- Generating polished internal communications, mission-aligned document drafts, or ethical review processes.
- Amazon Nova Lite 1.0
- Description: Amazon’s Nova Lite 1.0 focuses on scalability and integration into existing systems rather than being a standalone powerhouse. Its strength lies in combining AI insights with operational workflows.
- Strengths: Seamlessly integrates with cloud-based tools, databases, and administrative systems, making it ideal for enhancing operational workflows. Efficient at task automation.
- Weaknesses: Limited creativity and depth in academic applications. Best suited for structured, well-defined tasks.
- Use Cases:
- Supplementing LMS platforms with automated grading and feedback on standardized tests.
- Streamlining operations such as resource allocation, enrollment data analysis, or classroom scheduling.
- Meta Llama 3.2
- Description: Meta’s Llama 3.2 is a state-of-the-art open-source LLM with a strong emphasis on customization and adaptability. It is particularly advanced in multilingual capabilities and complex reasoning.
- Strengths: Highly versatile and adaptable, allowing tuning for specific academic fields. Excels at multilingual tasks and technical research support. Open-source nature allows for greater control and privacy.
- Weaknesses: May require significant expertise for optimal integration and tuning. Computational demand is higher compared to more task-specific models.
- Use Cases:
- Supporting academic research with literature reviews, translations, and technical writing. Ideal for institutions working across global or multilingual contexts.
- Building custom applications for data analytics, resource optimization, or international program support.
- Google Gemini 2.0 Flash Lite
- Description: A highly efficient, ultra-lightweight version of Gemini 2.0 Flash optimized for speed and basic language tasks.
- Strengths: Designed for minimal latency and maximum efficiency, effective at handling simple educational content like flashcards, definitions, and routine question-answering.
- Weaknesses: Not well-suited for complex tasks or documents that require understanding multi-step reasoning. Performs less reliably in technical, scientific, or graduate-level subject areas.
- Use Cases:
- Ideal for generating practice questions, flashcards, and brief concept reviews.
- Google Gemini 2.0 Flash
- Description: Gemini 2.0 Flash offers a strong balance between performance and efficiency, capable of moderate reasoning and contextual tasks.
- Strengths: Better than the Lite variant at interpreting structured data, charts, or multimedia inputs.
- Weaknesses: Not optimal for in-depth research tasks, thesis-level writing, or intricate domain-specific analysis. Can struggle with full-length research papers or multi-chapter readings without truncation or summarization.
- Use Cases:
- Can answer questions during lectures or guide students through exercises with contextual awareness.
- Help with drafting, rephrasing, or improving clarity in essays and reports.
Summary
- General Use: GPT-4.1, Llama 3.2, Gemini 2.0 Flash
- Creative and Ethical Work: Claude 3.5 Haiku
- Task-Specific Automation: OpenAI o3-Mini, Nova Lite 1.0
- Speed: GPT-4.1-Mini, Gemini 2.0 Flash Lite, Nova Lite 1.0