Introduction
Grafana: An open-source platform specialized in data visualization and monitoring. It connects to various data sources (like Prometheus, MySQL, ElasticSearch, and Loki) to create beautiful dashboards and set up automatic alerts.
Grafana Loki: A log aggregation system designed by Grafana Labs. If Prometheus is the standard for metrics, Loki is "Prometheus for logs." Instead of indexing the entire log content, Loki only indexes metadata (labels), making it incredibly lightweight and resource-efficient.
Why Use Grafana Loki for Log Management?
Loki offers several major advantages over traditional solutions like the ELK Stack (Elasticsearch - Logstash - Kibana):
- Cost-Efficient Storage: Since it only indexes labels instead of full text, Loki’s index size is much smaller (often 10x smaller than Elasticsearch). You can store logs cheaply on services like AWS S3 or Google Cloud Storage.
- High Performance & Low Resource Usage: Loki doesn’t need massive CPU or RAM to manage huge indexes. It runs smoothly even on small servers or Kubernetes clusters.
- Unified Ecosystem: If you already use Grafana for metrics, adding Loki lets you manage Metrics, Logs, and Traces in one place. You can jump from an error chart (metric) to specific log lines with just one click.
- Powerful LogQL Query Language: It uses LogQL, which is very similar to Prometheus's PromQL. This makes it easy to filter logs by labels and perform calculations (e.g., counting how many "500 errors" occurred in the last 5 minutes).
- Cloud-Native Support: Loki is built for Docker and Kubernetes. It automatically pulls metadata from Pods, making it simple to track errors by Service or Namespace.
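The "500 errors in the last 5 minutes" example above might look like this in LogQL (assuming your logs carry a job="nestjs-app" label, as in the Promtail setup later in this article):

```logql
# Count log lines containing "500" over the last 5 minutes
count_over_time({job="nestjs-app"} |= "500" [5m])
```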
Summary: Grafana Loki is the "Good, Cheap, and Fast" solution for DevOps teams who want a lean, highly integrated logging system without the complexity of heavy full-text search engines.
Deployment with Docker
First, create a NestJS project and implement a simple controller:
import {
  BadRequestException,
  Body,
  Controller,
  Get,
  InternalServerErrorException,
  Logger,
} from '@nestjs/common'

interface Req {
  value1: string
  value2: number
}

@Controller('test')
export class TestController {
  private readonly logger = new Logger(TestController.name)

  @Get('test-1')
  async test1() {
    this.logger.log('test1')
    return 'test-1'
  }

  @Get('test-2')
  async test2() {
    this.logger.log('test2')
    return this.tmpFunc()
  }

  @Get('test-3')
  async test3(@Body() dto: Req) {
    this.logger.log('test3')
    this.logger.error('test3')
    throw new InternalServerErrorException({
      dto,
      message: 'InternalServerErrorException test3',
    })
  }

  @Get('test-4')
  async test4(@Body() dto: Req) {
    this.logger.log('test4')
    return { dto, message: 'Completed query' }
  }

  tmpFunc() {
    this.logger.error('tmpFunc')
    throw new BadRequestException('BadRequestException tmpFunc')
  }
}
Next, create a logging.interceptor.ts file. This interceptor captures the request and response data to make tracking easier. Note: If your response data is too large, it might clutter your logs. You might want to only log the full body when an error is thrown.
import {
  Injectable,
  NestInterceptor,
  ExecutionContext,
  CallHandler,
  Logger,
} from '@nestjs/common'
import { Observable } from 'rxjs'
import { tap } from 'rxjs/operators'

@Injectable()
export class LoggingInterceptor implements NestInterceptor {
  private readonly logger = new Logger('HTTP')

  intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
    const request = context.switchToHttp().getRequest()
    const { method, url, body } = request
    const now = Date.now()

    return next.handle().pipe(
      tap(responseBody => {
        const delay = Date.now() - now
        const logData = {
          url,
          method,
          payload: body,
          response: responseBody,
          duration: `${delay}ms`,
        }
        this.logger.log(JSON.stringify(logData))
      })
    )
  }
}
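As the note above suggests, large response bodies can bloat your logs. One way to guard against that is a small truncation helper; this is a sketch, and the `truncatePayload` name and 2 KB default are arbitrary choices, not part of the original code:

```typescript
// Serialize a payload for logging, cutting it off past a size limit
// so one oversized response cannot flood the log stream.
export function truncatePayload(payload: unknown, maxChars = 2048): string {
  let serialized: string
  try {
    serialized = JSON.stringify(payload)
  } catch {
    // Circular structures etc. fall back to a plain string
    serialized = String(payload)
  }
  // JSON.stringify returns undefined for undefined/functions
  if (serialized === undefined) serialized = String(payload)
  if (serialized.length <= maxChars) return serialized
  return `${serialized.slice(0, maxChars)} [truncated ${serialized.length - maxChars} chars]`
}
```

Inside the interceptor you would then log `response: truncatePayload(responseBody)` instead of the raw object.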
Install the necessary packages:
yarn add winston winston-daily-rotate-file nest-winston
Update your main.ts file to use Winston for daily log rotation:
import { NestFactory } from '@nestjs/core'
import { WinstonModule } from 'nest-winston'
import * as winston from 'winston'
import 'winston-daily-rotate-file'

import { AppModule } from './app.module'
import { LoggingInterceptor } from './logging.interceptor'

async function bootstrap() {
  const app = await NestFactory.create(AppModule, {
    logger: WinstonModule.createLogger({
      transports: [
        new winston.transports.DailyRotateFile({
          filename: 'logs/%DATE%.log',
          datePattern: 'YYYY-MM-DD',
          zippedArchive: true,
          maxSize: '20m',
          maxFiles: '30d',
          format: winston.format.combine(
            winston.format.timestamp(),
            winston.format.json()
          ),
        }),
        new winston.transports.Console({
          format: winston.format.combine(
            winston.format.timestamp(),
            winston.format.ms(),
            winston.format.errors({ stack: true }),
            winston.format.colorize(),
            winston.format.printf(
              ({ timestamp, level, message, context, ms, stack }) => {
                const logMessage = stack || message
                return `${timestamp} ${level} [${context || 'App'}] ${logMessage} ${ms}`
              }
            )
          ),
        }),
      ],
    }),
  })

  app.useGlobalInterceptors(new LoggingInterceptor())
  await app.listen(process.env.PORT ?? 3000)
}

bootstrap()
- maxSize: Rotate to a new file once the current one reaches this size ('20m' = 20 MB).
- maxFiles: How long to keep rotated files before they are deleted ('30d' = 30 days).
- winston.format.json(): Writes each log entry as a single JSON line, which is easy for the log pipeline to parse.
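With this setup, each line in logs/&lt;date&gt;.log is one JSON object; a line produced by the interceptor looks roughly like this (field values illustrative):

```json
{"context":"HTTP","level":"info","message":"{\"url\":\"/test/test-1\",\"method\":\"GET\",\"payload\":{},\"response\":\"test-1\",\"duration\":\"3ms\"}","timestamp":"2024-01-01T00:00:00.000Z"}
```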
Create a Dockerfile for your NestJS project:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
Create the loki-config.yml file.
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  filesystem:
    directory: /tmp/loki/chunks

limits_config:
  allow_structured_metadata: true
  reject_old_samples: true
  reject_old_samples_max_age: 168h
Create the promtail-config.yml file.
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: nestjs_logs
    static_configs:
      - targets:
          - localhost
        labels:
          job: nestjs-app
          __path__: /var/log/nestjs/*.log
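Since the app writes JSON lines, you can optionally let Promtail parse them and promote the log level to a Loki label. This is a sketch using Promtail's pipeline_stages; the level field matches the Winston JSON output assumed above:

```yaml
scrape_configs:
  - job_name: nestjs_logs
    pipeline_stages:
      # Parse each JSON log line and extract selected fields
      - json:
          expressions:
            level: level
      # Promote the parsed level to a Loki label for fast filtering
      - labels:
          level:
    static_configs:
      - targets:
          - localhost
        labels:
          job: nestjs-app
          __path__: /var/log/nestjs/*.log
```

Only promote low-cardinality fields like level to labels; high-cardinality values (request IDs, user IDs) hurt Loki's index.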
Finally, set up your docker-compose.yml:
version: '3.8'

services:
  nestjs-app:
    build: .
    volumes:
      - ./logs:/app/logs
    ports:
      - "3000:3000"
    networks:
      - monitoring

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - monitoring

  promtail:
    image: grafana/promtail:latest
    volumes:
      - ./logs:/var/log/nestjs
      - ./promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge
Start everything up with:

docker compose up -d --build
Log in to Grafana at localhost:3001 (Default: admin / admin).
Add Loki as a Data Source, using http://loki:3100 as the URL (inside the Docker network, Grafana reaches Loki by its service name).
Call your APIs and head over to Drilldown > Logs to see your live data!
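A couple of starter queries, assuming the job="nestjs-app" label from the Promtail config:

```logql
# Everything from the app
{job="nestjs-app"}

# Only lines containing "error"
{job="nestjs-app"} |= "error"
```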
Happy coding!