notion-meeting-intelligence
Meeting Intelligence
Prep meetings by pulling Notion context, tailoring agendas/pre-reads, and enriching with Codex research.
Quick start
- Confirm meeting goal, attendees, date/time, and decisions needed.
- Gather context: search with `Notion:notion-search`, then fetch with `Notion:notion-fetch` (prior notes, specs, OKRs, decisions).
- Pick the right template via `reference/template-selection-guide.md` (status, decision, planning, retro, 1:1, brainstorming).
- Draft the agenda/pre-read in Notion with `Notion:notion-create-pages`, embedding source links and owner/timeboxes (skeleton example below).
- Enrich with Codex research (industry insights, benchmarks, risks) and update the page with `Notion:notion-update-page` as plans change.
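For illustration, a minimal pre-read skeleton the draft step might produce for a decision meeting. The title, owners, links, and timeboxes are hypothetical placeholders; the real layout comes from the selected template:

```markdown
# Decision: Q3 pricing change (2025-07-10, 30 min)
Goal: Decide whether the new pricing tier ships this quarter.
Pre-read: [Pricing spec](https://www.notion.so/...), [Q3 OKRs](https://www.notion.so/...)

## Agenda
1. Context recap (owner: PM, 5 min)
2. Options and trade-offs (owner: Finance, 10 min)
3. Decision and next steps (owner: all, 10 min)
```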
Workflow
0) If any MCP call fails because Notion MCP is not connected, pause and set it up:
- Add the Notion MCP:
  `codex mcp add notion --url https://mcp.notion.com/mcp`
- Enable the remote MCP client:
  - Set `[features].rmcp_client = true` in `config.toml` (sketch after this list), or run `codex --enable rmcp_client`
- Log in with OAuth:
  `codex mcp login notion`
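A minimal sketch of the `config.toml` entry from the step above. The `~/.codex/config.toml` location is an assumption; check where your Codex install reads its config:

```toml
# ~/.codex/config.toml (assumed location)
# Enable the remote MCP client so Codex can reach hosted MCP servers like Notion's
[features]
rmcp_client = true
```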
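And the setup commands in one pass, as a shell sketch. The `codex mcp list` verification step is an assumption; see `codex mcp --help` for the subcommands your version supports:

```sh
# Assumes [features].rmcp_client = true is already set in config.toml
codex mcp add notion --url https://mcp.notion.com/mcp   # register the Notion MCP server
codex mcp login notion                                   # authenticate via OAuth
codex mcp list                                           # assumed: confirm the server is connected
```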