Introduction: What Is an NTU Knowledge QA System?

An NTU knowledge question answering system (Nanyang Technological University Knowledge Question Answering) is an intelligent QA system built on knowledge bases related to Nanyang Technological University (NTU). Such systems typically combine natural language processing (NLP), knowledge graphs, and machine learning to give users accurate, efficient access to academic and campus information.

With the rapid progress of artificial intelligence, knowledge QA systems have become an important part of university IT infrastructure. As one of Asia's leading research universities, NTU has a knowledge QA system that covers not only basic information such as courses, research projects, and campus facilities, but also more complex dimensions such as academic policies, research resources, and campus services.

Part 1: Getting Started

1.1 Core Concepts of NTU Knowledge QA

Before building or using an NTU knowledge QA system, we need to understand a few core concepts:

Knowledge Graph. A knowledge graph is the "brain" of a QA system: it stores entities and their relationships in structured form. In the NTU context, the graph might contain entities such as the following (a tiny triple sketch follows the list):

  • People (professors, students, administrative staff)
  • Organizations (schools, departments, research centers)
  • Courses (undergraduate and graduate courses)
  • Locations (teaching buildings, labs, libraries)
  • Events (lectures, seminars, campus activities)
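
For intuition, a knowledge graph can be thought of as a set of (subject, relation, object) triples. A tiny hypothetical sketch (the names are reused from examples later in this guide; the facts themselves are invented):

# Hypothetical NTU facts expressed as (subject, relation, object) triples
ntu_triples = [
    ("Zhang Wei", "works_at", "School of Computer Science and Engineering"),
    ("Zhang Wei", "teaches", "CS410"),
    ("CS410", "offered_by", "School of Computer Science and Engineering"),
]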

Natural Language Understanding (NLU). NLU is the process of converting a user's natural-language query into a structured query the system can act on. For example, when a user asks "Which department is Professor Zhang in?", the system must recognize "Professor Zhang" as an entity and "which department" as the query intent.
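
As a rough illustration, the structured form of that question might look like the dictionary below (the field and label names are invented for this sketch; the real schema is whatever your NLU module emits):

# Hypothetical structured form of the user's question
parsed_query = {
    "text": "Which department is Professor Zhang in?",
    "intent": "query_faculty",  # what the user wants to know
    "entities": [{"text": "Professor Zhang", "label": "PERSON"}],  # who/what it is about
}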

1.2 Environment Setup and Tooling

If you want to build a system similar to NTU knowledge QA yourself, here are the necessary tools and environment:

Languages and frameworks

  • Python 3.8+
  • PyTorch or TensorFlow (for deep learning models)
  • spaCy or NLTK (for natural language processing)
  • Neo4j or Amazon Neptune (for knowledge graph storage)

Development environment

# Create a virtual environment
python -m venv ntu_qa_env
source ntu_qa_env/bin/activate

# Install core dependencies
pip install torch transformers spacy neo4j
pip install flask  # for building the web service
pip install networkx  # for graph analysis

1.3 Data Collection and Preprocessing

The first step in building an NTU knowledge QA system is collecting and organizing data. A typical collection workflow:

Data sources

  1. Official documents: the NTU website, school websites, course handbooks (see the fetching sketch below)
  2. Academic databases: research papers, project reports
  3. Campus service data: library catalogs, student service guides
  4. Manual curation: FAQs gathered through interviews and surveys
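
As an illustration of source 1, a minimal page-fetching sketch using requests and BeautifulSoup (the URL is only an example; check robots.txt and the site's terms of use before crawling):

import requests
from bs4 import BeautifulSoup

def fetch_page_text(url: str) -> str:
    """Fetch a page and return its visible text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return soup.get_text(separator=" ", strip=True)

# Example: text = fetch_page_text("https://www.ntu.edu.sg/")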

Sample preprocessing code

import pandas as pd
import re
from typing import List, Dict

class DataPreprocessor:
    def __init__(self):
        self.stop_words = set(['the', 'is', 'at', 'which', 'on', 'a', 'an'])  # reserved for optional stop-word filtering
    
    def clean_text(self, text: str) -> str:
        """清洗文本,去除特殊字符和多余空格"""
        text = re.sub(r'[^\w\s]', '', text)
        text = re.sub(r'\s+', ' ', text)
        return text.strip().lower()
    
    def extract_entities(self, text: str) -> List[str]:
        """简单的实体提取(实际项目中应使用NER模型)"""
        # 这里使用简单的规则,实际应用中应使用spaCy或BERT-NER
        words = text.split()
        entities = []
        for word in words:
            if word[0].isupper() and len(word) > 2:
                entities.append(word)
        return entities
    
    def process_dataset(self, data: List[Dict]) -> List[Dict]:
        """处理整个数据集"""
        processed_data = []
        for item in data:
            processed_item = {
                'question': self.clean_text(item['question']),
                'answer': self.clean_text(item['answer']),
                'entities': self.extract_entities(item['question']),
                'category': item.get('category', 'general')
            }
            processed_data.append(processed_item)
        return processed_data

# Usage example
preprocessor = DataPreprocessor()
sample_data = [
    {"question": "What is the tuition fee for Computer Science undergraduate program?",
     "answer": "The annual tuition fee for Computer Science undergraduate program is SGD 38,000.",
     "category": "admissions"},
    {"question": "How to apply for NTU research scholarship?",
     "answer": "You can apply through the NTU graduate admissions portal with required documents.",
     "category": "scholarship"}
]

processed = preprocessor.process_dataset(sample_data)
print(processed)

Part 2: Core Technology in Depth

2.1 Building the Knowledge Graph

The knowledge graph is the heart of the NTU QA system. The construction steps:

Step 1: Define the ontology

# Define the ontology for the NTU domain
ntu_ontology = {
    "entity_types": {
        "Person": ["Professor", "Student", "Staff"],
        "Organization": ["School", "Department", "Research Center"],
        "Course": ["Undergraduate", "Graduate", "PhD"],
        "Location": ["Building", "Lab", "Library"],
        "Event": ["Seminar", "Workshop", "Exam"]
    },
    "关系类型": {
        "works_at": ["Person", "Organization"],
        "teaches": ["Person", "Course"],
        "located_in": ["Location", "Organization"],
        "participates_in": ["Person", "Event"]
    }
}

Step 2: Build the knowledge graph

from neo4j import GraphDatabase

class NTUKnowledgeGraph:
    def __init__(self, uri, user, password):
        self.driver = GraphDatabase.driver(uri, auth=(user, password))
    
    def close(self):
        self.driver.close()
    
    def create_node(self, label, properties):
        """创建节点"""
        with self.driver.session() as session:
            query = f"""
            CREATE (n:{label} $props)
            RETURN id(n) as node_id
            """
            result = session.run(query, props=properties)
            return result.single()["node_id"]
    
    def create_relationship(self, from_id, to_id, rel_type, rel_props=None):
        """创建关系"""
        with self.driver.session() as session:
            query = f"""
            MATCH (a), (b)
            WHERE id(a) = $from_id AND id(b) = $to_id
            CREATE (a)-[r:{rel_type} $props]->(b)
            RETURN r
            """
            result = session.run(query, from_id=from_id, to_id=to_id, props=rel_props or {})
            return result
    
    def query(self, cypher_query, params=None):
        """执行Cypher查询"""
        with self.driver.session() as session:
            result = session.run(cypher_query, params or {})
            return [dict(record) for record in result]

# Usage example
kg = NTUKnowledgeGraph("bolt://localhost:7687", "neo4j", "password")

# Create a professor node
prof_id = kg.create_node("Professor", {
    "name": "Zhang Wei",
    "email": "zhang.wei@ntu.edu.sg",
    "department": "Computer Science",
    "research_area": "Artificial Intelligence"
})

# Create a course node
course_id = kg.create_node("Course", {
    "name": "CS410",
    "title": "Natural Language Processing",
    "credits": 3
})

# Create a relationship
kg.create_relationship(prof_id, course_id, "teaches", {"semester": "2024 Fall"})

# Query example
result = kg.query("""
MATCH (p:Professor)-[:teaches]->(c:Course)
WHERE p.department = "Computer Science"
RETURN p.name, c.name
""")
print(result)

2.2 The Natural Language Understanding (NLU) Module

The NLU module converts a user's natural-language question into a structured query. Its main components:

Intent classification

from transformers import BertTokenizer, BertForSequenceClassification
import torch
from typing import Dict

class IntentClassifier:
    def __init__(self, model_path=None):
        self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
        if model_path:
            self.model = BertForSequenceClassification.from_pretrained(model_path)
        else:
            # Initialize a plain classification head (must be trained in a real project)
            self.model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=5)
        
        self.intent_labels = {
            0: "query_course_info",
            1: "query_admission",
            2: "query_scholarship",
            3: "query_faculty",
            4: "query_campus_info"
        }
    
    def predict(self, text: str) -> Dict:
        """预测意图"""
        inputs = self.tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            outputs = self.model(**inputs)
            probabilities = torch.softmax(outputs.logits, dim=1)
            predicted_class = torch.argmax(probabilities, dim=1).item()
            confidence = probabilities[0][predicted_class].item()
        
        return {
            "intent": self.intent_labels[predicted_class],
            "confidence": confidence
        }

# Usage example
classifier = IntentClassifier()
result = classifier.predict("What are the admission requirements for Computer Science?")
print(f"Intent: {result['intent']}, Confidence: {result['confidence']:.2f}")

Named entity recognition (NER)

import spacy
from typing import List, Dict

class EntityRecognizer:
    def __init__(self):
        # Load the spaCy English model
        self.nlp = spacy.load("en_core_web_sm")
        # Custom NTU-domain entity labels
        self.custom_entities = {
            "NTU": "ORG", "Nanyang Technological University": "ORG",
            "Computer Science": "DEPARTMENT", "CS": "DEPARTMENT",
            "Scholarship": "SCHOLARSHIP", "Admission": "ADMISSION"
        }
    
    def extract_entities(self, text: str) -> List[Dict]:
        """提取实体"""
        doc = self.nlp(text)
        entities = []
        
        # Standard spaCy entities
        for ent in doc.ents:
            entities.append({"text": ent.text, "label": ent.label_})
        
        # Custom entity matching
        for phrase, label in self.custom_entities.items():
            if phrase.lower() in text.lower():
                entities.append({"text": phrase, "label": label})
        
        return entities

# Usage example
ner = EntityRecognizer()
text = "What is the tuition fee for Computer Science at NTU?"
entities = ner.extract_entities(text)
print(entities)
# Possible output (spaCy's own entities may also appear, so order and count can vary):
# [{'text': 'Computer Science', 'label': 'DEPARTMENT'}, {'text': 'NTU', 'label': 'ORG'}]

2.3 Query Generation and Execution

Turning NLU output into knowledge graph queries:

class QueryGenerator:
    def __init__(self, kg: NTUKnowledgeGraph):
        self.kg = kg
    
    def generate_cypher(self, intent: str, entities: List[Dict]) -> str:
        """根据意图和实体生成Cypher查询"""
        
        if intent == "query_course_info":
            # Query course information
            course_entity = next((e for e in entities if e['label'] in ['COURSE', 'DEPARTMENT']), None)
            if course_entity:
                return f"""
                MATCH (c:Course)-[:OFFERED_BY]->(s:School)
                WHERE c.name CONTAINS '{course_entity['text']}' OR c.title CONTAINS '{course_entity['text']}'
                RETURN c.name as course_code, c.title as course_title, s.name as school
                """
        
        elif intent == "query_faculty":
            # Query faculty information
            dept_entity = next((e for e in entities if e['label'] == 'DEPARTMENT'), None)
            if dept_entity:
                return f"""
                MATCH (p:Professor)-[:WORKS_AT]->(d:Department)
                WHERE d.name CONTAINS '{dept_entity['text']}'
                RETURN p.name as professor, p.email as email, p.research_area as research
                """
        
        elif intent == "query_admission":
            # Query admission information
            return """
            MATCH (a:AdmissionInfo)
            RETURN a.requirements as requirements, a.deadline as deadline
            """
        
        return ""
    
    def execute_query(self, intent: str, entities: List[Dict]) -> List[Dict]:
        """Run the query and return the results"""
        cypher = self.generate_cypher(intent, entities)
        if not cypher:
            return [{"error": "Could not generate a valid query"}]
        
        try:
            results = self.kg.query(cypher)
            return results if results else [{"info": "No matching information found"}]
        except Exception as e:
            return [{"error": f"Query execution failed: {str(e)}"}]

# Usage example
query_gen = QueryGenerator(kg)
entities = [{"text": "Computer Science", "label": "DEPARTMENT"}]
results = query_gen.execute_query("query_faculty", entities)
print(results)

Part 3: Advanced Applications and Optimization

3.1 Semantic Search and Vector Retrieval

Vector retrieval can improve answer accuracy:

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
from typing import List, Dict

class SemanticSearch:
    def __init__(self):
        self.model = SentenceTransformer('all-MiniLM-L6-v2')
        self.qa_pairs = []  # stored QA pairs
    
    def add_qa_pair(self, question: str, answer: str):
        """添加问答对"""
        embedding = self.model.encode(question)
        self.qa_pairs.append({
            "question": question,
            "answer": answer,
            "embedding": embedding
        })
    
    def search(self, query: str, top_k=3) -> List[Dict]:
        """语义搜索"""
        query_embedding = self.model.encode(query)
        
        similarities = []
        for pair in self.qa_pairs:
            sim = cosine_similarity(
                query_embedding.reshape(1, -1),
                pair['embedding'].reshape(1, -1)
            )[0][0]
            similarities.append({
                "question": pair['question'],
                "answer": pair['answer'],
                "similarity": sim
            })
        
        # Sort by similarity
        similarities.sort(key=lambda x: x['similarity'], reverse=True)
        return similarities[:top_k]

# Usage example
semantic_search = SemanticSearch()
semantic_search.add_qa_pair(
    "What is the tuition fee for Computer Science?",
    "The annual tuition fee for Computer Science undergraduate program is SGD 38,000."
)
semantic_search.add_qa_pair(
    "How much does CS program cost?",
    "The annual tuition fee for Computer Science undergraduate program is SGD 38,000."
)

results = semantic_search.search("What is the cost of studying CS?")
print(results)

3.2 Multi-Turn Dialogue Management

Multi-turn dialogue requires maintaining conversation state:

class DialogueManager:
    def __init__(self):
        self.session_states = {}  # session_id -> state
    
    def update_state(self, session_id: str, entities: List[Dict], intent: str):
        """更新对话状态"""
        if session_id not in self.session_states:
            self.session_states[session_id] = {
                "entities": [],
                "intent_history": [],
                "context": {}
            }
        
        self.session_states[session_id]["entities"].extend(entities)
        self.session_states[session_id]["intent_history"].append(intent)
    
    def get_response(self, session_id: str, current_query: str) -> str:
        """Generate a response from the current state"""
        state = self.session_states.get(session_id)
        if not state:
            return "What would you like to know?"
        
        # Check whether clarification is needed
        if len(state["entities"]) == 0:
            return "Please be more specific, e.g. 'Who are the professors in Computer Science?'"
        
        # Respond based on the intent history
        last_intent = state["intent_history"][-1] if state["intent_history"] else ""
        
        if last_intent == "query_course_info":
            return f"About courses in {state['entities'][0]['text']}: what details would you like?"
        
        return "Thanks for your question. Looking up the relevant information..."

# Usage example
dialogue_manager = DialogueManager()
session_id = "user123"

# First turn
dialogue_manager.update_state(session_id, [{"text": "Computer Science", "label": "DEPARTMENT"}], "query_faculty")
response1 = dialogue_manager.get_response(session_id, "Who are the professors in Computer Science?")
print(f"Bot: {response1}")

# Second turn (multi-turn dialogue)
dialogue_manager.update_state(session_id, [{"text": "AI", "label": "RESEARCH_AREA"}], "query_research")
response2 = dialogue_manager.get_response(session_id, "Who works on AI?")
print(f"Bot: {response2}")

3.3 Performance Optimization

Caching

import hashlib
import time
from functools import wraps

class QueryCache:
    def __init__(self, max_size=1000, ttl=3600):
        self.cache = {}
        self.max_size = max_size
        self.ttl = ttl
    
    def _make_key(self, query: str) -> str:
        return hashlib.md5(query.encode()).hexdigest()
    
    def get(self, query: str):
        key = self._make_key(query)
        if key in self.cache:
            entry = self.cache[key]
            if time.time() - entry['timestamp'] < self.ttl:
                return entry['result']
            else:
                del self.cache[key]
        return None
    
    def set(self, query: str, result):
        key = self._make_key(query)
        if len(self.cache) >= self.max_size:
            # Evict the oldest entry
            oldest_key = min(self.cache, key=lambda k: self.cache[k]['timestamp'])
            del self.cache[oldest_key]
        
        self.cache[key] = {
            'result': result,
            'timestamp': time.time()
        }

# Using the decorator
def cache_decorator(cache: QueryCache):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Assume the first positional argument is the query
            query = args[0] if args else ""
            cached_result = cache.get(query)
            if cached_result is not None:
                return cached_result
            
            result = func(*args, **kwargs)
            cache.set(query, result)
            return result
        return wrapper
    return decorator

# Usage example
cache = QueryCache()

@cache_decorator(cache)
def expensive_query(query: str):
    # Simulate a slow query (time is imported at the top of this block)
    time.sleep(2)
    return f"Result for {query}"

# First call (slow)
print(expensive_query("Computer Science courses"))
# Second call (fast, served from cache)
print(expensive_query("Computer Science courses"))

Part 4: Common Problems and Solutions

4.1 Data Quality and Coverage

Question 1: What if the knowledge graph is incomplete?

Solutions:

  1. Data augmentation: crawl the NTU website periodically for updates
  2. Crowdsourced annotation: invite students and staff to contribute knowledge
  3. Active learning: flag low-confidence answers and prioritize collecting data for them, as in the sketch below

import time
from typing import List, Dict

class DataQualityMonitor:
    def __init__(self):
        self.low_confidence_queries = []
    
    def log_low_confidence(self, query: str, confidence: float):
        """记录低置信度查询"""
        if confidence < 0.7:
            self.low_confidence_queries.append({
                "query": query,
                "timestamp": time.time(),
                "confidence": confidence
            })
    
    def get_retraining_candidates(self) -> List[tuple]:
        """Return the most frequent low-confidence queries for retraining"""
        # Rank queries by frequency
        from collections import Counter
        queries = [q['query'] for q in self.low_confidence_queries]
        return Counter(queries).most_common(10)

# Usage example
monitor = DataQualityMonitor()
monitor.log_low_confidence("What is CS410 about?", 0.65)
monitor.log_low_confidence("CS410 details", 0.58)
print(monitor.get_retraining_candidates())

Question 2: How do we handle outdated information?

  • Version control: add timestamp properties to knowledge graph nodes (see the sketch below)
  • Periodic sync: schedule jobs that re-sync official data sources
  • User feedback: let users flag outdated information
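
A minimal sketch of the timestamp idea, reusing the NTUKnowledgeGraph wrapper from Part 2 and assuming nodes carry a last_updated property in epoch seconds (an assumption of this sketch, not something the earlier examples set):

import time

STALE_AFTER = 180 * 24 * 3600  # flag anything untouched for roughly six months

def find_stale_nodes(kg, label="Course"):
    """Return nodes whose last_updated timestamp is older than the cutoff."""
    cutoff = time.time() - STALE_AFTER
    return kg.query(
        f"MATCH (n:{label}) WHERE n.last_updated < $cutoff "
        "RETURN n.name AS name, n.last_updated AS last_updated",
        {"cutoff": cutoff},
    )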

4.2 Performance and Scalability

Question 3: What if the system responds slowly?

Optimization strategies:

  1. Asynchronous processing: offload slow tasks to tools such as Celery
  2. Database optimization: add indexes to the knowledge graph (an index example follows the async sketch below)
  3. Load balancing: deploy multiple instances

# Async query example (using asyncio)
import asyncio
from neo4j import AsyncGraphDatabase

class AsyncNTUKG:
    def __init__(self, uri, user, password):
        self.driver = AsyncGraphDatabase.driver(uri, auth=(user, password))
    
    async def query_async(self, cypher: str):
        async with self.driver.session() as session:
            result = await session.run(cypher)
            return [dict(record) async for record in result]

async def main():
    kg = AsyncNTUKG("bolt://localhost:7687", "neo4j", "password")
    results = await kg.query_async("MATCH (n) RETURN count(n) as count")
    print(results)
    await kg.driver.close()

# Run it
# asyncio.run(main())
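
For strategy 2 above, Neo4j property indexes can speed up the lookups the earlier Cypher examples filter on. A minimal sketch via the synchronous NTUKnowledgeGraph wrapper from Part 2, reusing the kg instance from the earlier usage example (the index names are arbitrary):

# Index the properties that earlier queries filter on
kg.query("CREATE INDEX course_name IF NOT EXISTS FOR (c:Course) ON (c.name)")
kg.query("CREATE INDEX professor_department IF NOT EXISTS FOR (p:Professor) ON (p.department)")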

Question 4: How do we support more users?

  • Horizontal scaling: deploy with Kubernetes
  • Caching layer: cache hot data in Redis (a minimal sketch follows)
  • CDN acceleration: serve static assets from a CDN
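
A minimal sketch of the Redis caching layer, assuming a local Redis instance and the redis-py client (the key prefix and TTL are arbitrary choices):

import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cached_answer(query: str, compute_fn, ttl=3600):
    """Serve an answer from Redis if cached; otherwise compute and cache it."""
    key = f"ntu_qa:{query}"
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)
    result = compute_fn(query)
    r.setex(key, ttl, json.dumps(result))  # expire after ttl seconds
    return result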

4.3 Accuracy and Reliability

Question 5: How do we handle fuzzy queries?

Solutions:

  1. Fuzzy matching: use edit-distance similarity such as Levenshtein distance
  2. Synonym expansion: maintain a synonym dictionary (a sketch follows the fuzzy-matching example below)
  3. Context awareness: use the dialogue history

import difflib
from typing import List

def fuzzy_match(query: str, candidates: List[str], threshold=0.6) -> List[str]:
    """模糊匹配"""
    matches = []
    for candidate in candidates:
        similarity = difflib.SequenceMatcher(None, query.lower(), candidate.lower()).ratio()
        if similarity >= threshold:
            matches.append((candidate, similarity))
    
    return [m[0] for m in sorted(matches, key=lambda x: x[1], reverse=True)]

# Usage example
candidates = ["Computer Science", "Computer Engineering", "Computational Linguistics"]
query = "Comp Sci"
matches = fuzzy_match(query, candidates)
print(f"Matches: {matches}")

Question 6: How do we handle questions outside the knowledge base?

  • Confidence threshold: refuse to answer below a confidence threshold (e.g. < 0.5), as sketched below
  • Human handoff: route complex questions to human support staff
  • Learning loop: log unanswered questions and update the knowledge base periodically
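
A minimal sketch of the threshold-plus-handoff idea, reusing the confidence score produced by the IntentClassifier in Part 2 (the threshold, message, and logging are illustrative):

UNANSWERED_LOG = []  # questions to review when updating the knowledge base

def answer_or_fallback(question: str, intent_result: dict, threshold=0.5):
    """Refuse low-confidence answers and hand the user off instead."""
    if intent_result["confidence"] < threshold:
        UNANSWERED_LOG.append(question)  # learning loop: review these later
        return "I'm not confident about this one; let me connect you with a staff member."
    return f"Answering with intent: {intent_result['intent']}"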

4.4 Deployment and Maintenance

Question 7: How do we monitor system health?

import logging
from datetime import datetime
from typing import Dict

class SystemMonitor:
    def __init__(self):
        self.logger = logging.getLogger("NTU_QA_System")
        self.metrics = {
            "total_queries": 0,
            "successful_queries": 0,
            "failed_queries": 0,
            "avg_response_time": 0
        }
        self.response_times = []
    
    def log_query(self, query: str, response_time: float, success: bool):
        """记录查询日志"""
        self.metrics["total_queries"] += 1
        if success:
            self.metrics["successful_queries"] += 1
        else:
            self.metrics["failed_queries"] += 1
        
        self.response_times.append(response_time)
        self.metrics["avg_response_time"] = sum(self.response_times) / len(self.response_times)
        
        # Write to the log
        status = "SUCCESS" if success else "FAILED"
        self.logger.info(f"[{datetime.now()}] {status} | Query: {query} | Time: {response_time:.2f}s")
    
    def get_health_report(self) -> Dict:
        """生成健康报告"""
        success_rate = (self.metrics["successful_queries"] / self.metrics["total_queries"] * 100) if self.metrics["total_queries"] > 0 else 0
        
        return {
            "timestamp": datetime.now().isoformat(),
            "metrics": self.metrics,
            "success_rate": f"{success_rate:.2f}%",
            "health_status": "HEALTHY" if success_rate > 95 else "WARNING" if success_rate > 80 else "CRITICAL"
        }

# Usage example
monitor = SystemMonitor()
monitor.log_query("CS courses", 0.5, True)
monitor.log_query("invalid query", 2.0, False)
print(monitor.get_health_report())

Question 8: How do we run A/B tests?

  • Traffic splitting: split traffic with Nginx or an API gateway (a minimal bucketing sketch follows)
  • Metric comparison: compare answer accuracy and response time across variants
  • User feedback: collect satisfaction ratings
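
Traffic splitting can be as simple as hashing a stable user identifier into buckets so each user consistently sees the same variant. A minimal sketch (the ratio and variant names are arbitrary):

import hashlib

def ab_bucket(user_id: str, treatment_ratio: float = 0.5) -> str:
    """Deterministically assign a user to variant 'A' or 'B'."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    score = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash into [0, 1]
    return "B" if score < treatment_ratio else "A"

print(ab_bucket("user123"))  # stable assignment per user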

Part 5: Best Practices and a Case Study

5.1 Case Study: An NTU Course Recommender

Background: help students choose suitable courses based on their interests and background

Implementation

  1. Student profile: collect background (GPA, completed courses, interests)
  2. Course matching: combine collaborative filtering with content-based recommendation
  3. Explainability: give a reason for each recommendation

from typing import Dict, List

class CourseRecommender:
    def __init__(self, kg: NTUKnowledgeGraph):
        self.kg = kg
    
    def recommend_courses(self, student_profile: Dict, top_k=5) -> List[Dict]:
        """基于学生画像推荐课程"""
        
        # 查询学生先修课程
        prereq_query = f"""
        MATCH (s:Student)-[:TOOK]->(c:Course)
        WHERE s.id = '{student_profile['student_id']}'
        RETURN c.name as course_code
        """
        prereq_courses = self.kg.query(prereq_query)
        
        # Recommend courses of comparable difficulty
        recommend_query = """
        MATCH (c:Course)-[:HAS_PREREQ]->(prereq:Course)
        WHERE prereq.name IN $prereq_list
        AND c.difficulty <= $max_difficulty
        RETURN c.name as course_code, c.title as title, c.credits as credits
        ORDER BY c.difficulty DESC
        LIMIT $top_k
        """
        
        results = self.kg.query(recommend_query, {
            "prereq_list": [c['course_code'] for c in prereq_courses],
            "max_difficulty": student_profile['max_difficulty'],
            "top_k": top_k
        })
        
        return results

# Usage example
recommender = CourseRecommender(kg)
student = {
    "student_id": "U12345678A",
    "max_difficulty": 3.5,
    "interests": ["AI", "Data Science"]
}
recommendations = recommender.recommend_courses(student)
print(recommendations)

5.2 Best Practices Summary

  1. Data quality first: keep the knowledge base accurate and current
  2. Incremental development: start with FAQs, then expand to complex QA
  3. User-centered design: keep collecting user feedback
  4. Monitoring and iteration: build a thorough monitoring pipeline
  5. Security and privacy: protect users' query data

Part 6: Future Directions

6.1 Technology Trends

  1. Large language model (LLM) integration: pair the system with models such as GPT-4 to improve understanding
  2. Multimodal QA: accept images, tables, and other input modalities
  3. Personalized answers: tailor responses using a user's history
  4. Online learning: absorb new knowledge without full retraining

6.2 New Application Scenarios

  • Intelligent teaching assistant: automatically answer course-related questions
  • Research collaboration: match faculty and students with similar research interests
  • Alumni services: career development and alumni network queries

Conclusion

Building an effective NTU knowledge QA system is a systems effort that combines knowledge graphs, natural language processing, and machine learning. This guide has covered the full path from getting started to mastery: data preparation, core algorithms, performance optimization, and fixes for common problems.

Remember that a successful QA system is not just a technical build; continuous optimization driven by user feedback matters even more. Start with a small MVP (minimum viable product), iterate steadily, and you will end up with a genuinely valuable intelligent QA system.


Appendix: Recommended Resources

  1. Neo4j documentation: https://neo4j.com/docs/
  2. spaCy usage guide: https://spacy.io/usage
  3. Hugging Face Transformers: https://huggingface.co/docs/transformers
  4. NTU official site: https://www.ntu.edu.sg/

Hopefully this guide saves you some detours and helps you build a high-quality NTU knowledge QA application quickly!