Command documentation sourced from the linux-command project.

column - Columnate lists

The column command is a powerful utility for formatting text into columns, creating clean, organized table layouts from input data. It reads input from files or standard input and arranges the data into evenly spaced columns, making it ideal for displaying lists, system information, and tabular data in a readable format. The command supports various input formats including space-separated, tab-separated, and custom delimiter-separated data, with options for controlling column width, sorting, and table formatting.
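
For a quick feel of the two main modes: the default mode flows a plain list into as many columns as fit the terminal, while -t treats each input line as a row of fields and pads every column to its widest entry. The commented output below is approximate:

# List mode: items flow into as many columns as fit the terminal
ls /usr/bin | column

# Table mode: fields aligned per column, separated by two spaces by default
printf "NAME SIZE\nreport.txt 4K\narchive.tar.gz 12M\n" | column -t
# NAME            SIZE
# report.txt      4K
# archive.tar.gz  12M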

Basic Syntax

column [OPTIONS] [FILE...]

Common Options

Table Formatting

  • -t, --table - Create a table (determine the number of columns from the input and align them)
  • -c, --output-width WIDTH - Limit output width to WIDTH characters (--columns is a deprecated synonym)
  • -x, --fillrows - Fill rows before columns (horizontal layout; see the sketch below)
  • -e, --table-header-repeat - Print the header line for each page of table output
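
The fill-order difference is easiest to see with a small numbered list. A minimal sketch; the exact spacing and number of columns depend on your terminal width:

# Default: entries run down the first column, then continue in the next one
seq 1 8 | column -c 24

# -x: entries run across each row first (watch where "2" lands)
seq 1 8 | column -x -c 24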

Input/Output Separators

  • -s, --separator SEPARATORS - Specify the set of possible input field delimiter characters (default: whitespace)
  • -o, --output-separator STRING - Specify the output column separator for table output (default: two spaces; see the sketch below)
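
A small sketch of how the two separators interact: -s splits the input fields, -o joins the output columns in table mode. The alignment shown in the comments is approximate:

# Split passwd-style records on ":" and join the output with " | "
printf "root:x:0\ndaemon:x:1\n" | column -t -s ':' -o ' | '
# root   | x | 0
# daemon | x | 1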

Formatting Options

  • -R, --table-right COLUMNS - Right-align text in the specified columns
  • -N, --table-columns NAMES - Specify column names as a comma-separated list
  • -d, --table-noheadings - Don't print the header line
  • -T, --table-truncate COLUMNS - Allow text to be truncated in the specified columns when necessary
  • -n, --table-name NAME - Set the table name (used for JSON output with -J)
  • -E, --table-noextreme COLUMNS - Ignore unusually long cells in the specified columns when calculating column widths

These table options belong to the util-linux implementation of column; older BSD versions support essentially only -c, -s, -t and -x. A combined sketch follows below.
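
A minimal sketch combining the extended table options, assuming a reasonably recent util-linux column. The sample data is made up and the commented output is approximate:

# Name the columns so -R can refer to SIZE by name, right-align it, and hide the header with -d
printf "report.txt 4096\narchive.tar 1048576\n" |
column -t -N NAME,SIZE -R SIZE -d
# report.txt      4096
# archive.tar  1048576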

Display Options

  • -V, --version - Display version information
  • -h, --help - Show help message

Usage Examples

Basic Column Operations

Creating Simple Columns

# Basic column formatting from input list
printf "apple\nbanana\ncherry\ndate\n" | column

# Create 2-column output with specific width
printf "apple\nbanana\ncherry\ndate\n" | column -c 20

# Fill rows first (horizontal layout)
printf "apple\nbanana\ncherry\ndate\n" | column -x

# Custom output separator (takes effect in table mode)
printf "apple red\nbanana yellow\ncherry red\n" | column -t -o " | "

# Multiple lines with custom spacing
printf "apple\nbanana\ncherry\ndate\nelderberry\nfig\ngrape\n" | column -c 30

# Tab-separated output columns
printf "apple red\nbanana yellow\ncherry red\n" | column -t -o $'\t'

Working with Files

# Format file contents into columns
column file.txt

# Create columns from multiple files
column file1.txt file2.txt

# Process standard input from directory listing
ls -1 | column

# Format command output
ls -la | awk '{print $1, $9}' | column -t

# Process piped data
ps aux | head -20 | column -t

Table Mode Operations

Creating Tables from Delimited Data

# Create table from comma-separated data
printf "Name,Age,City\nJohn,25,NYC\nJane,30,LA\n" | column -t -s ","

# Table with custom separators
printf "Name|Age|City\nJohn|25|NYC\nJane|30|LA\n" | column -t -s "|"

# Multiple input separators
printf "Name Age City\nJohn,25 NYC\nJane 30,LA\n" | column -t -s ", "

# Tab-separated data
printf "Name\tAge\tCity\nJohn\t25\tNYC\nJane\t30\tLA\n" | column -t -s $'\t'

Advanced Table Formatting

# Right-align specific columns
printf "Name,Salary\nJohn,50000\nJane,75000\n" | column -t -s "," -R 2

# Multiple right-aligned columns
printf "Name,Salary,Bonus\nJohn,50000,5000\nJane,75000,7500\n" | column -t -s "," -R 2,3

# Custom output separator for tables
printf "Name,Age,City\nJohn,25,NYC\nJane,30,LA\n" | column -t -s "," -o " "

# Wide output with specific column count
printf "Item1 Item2 Item3 Item4 Item5 Item6\n" | column -t -c 40

Data Processing Examples

System Information Display

# Format process list
ps aux | column -t

# Format mount information
mount | column -t

# Display environment variables
env | column -t -s "="

# Format network interfaces
ip addr show | column -t

# Display disk usage in columns
df -h | column -t

File and Directory Operations

# Create directory listing in columns
ls -la | column -t

# Format file permissions and sizes
find . -ls | column -t

# Display user information
cat /etc/passwd | column -t -s ":"

# Group information display
cat /etc/group | column -t -s ":"

# Package listing (Debian/Ubuntu)
dpkg -l | column -t

Advanced Formatting Techniques

Complex Data Tables

# Create formatted report from CSV data
cat data.csv | column -t -s "," -R 2,3

# Multi-line records with custom processing
printf "Record 1\nLine 1\nLine 2\nRecord 2\nLine 1\nLine 2\n" |
awk 'NR%3==1{if(NR>1)print ""; printf "%s ", $0} NR%3!=1{printf "%s ", $0} END{print ""}' |
column -t

# Financial data formatting
printf "Date,Description,Amount\n2024-01-01,Coffee,3.50\n2024-01-02,Gas,45.00\n" |
column -t -s "," -R 3

# Log file analysis
tail -100 /var/log/syslog | awk '{print $1, $2, $3, $5}' | column -t

Custom Layouts

# Create contact list
printf "John Doe\n555-1234\njohn@email.com\nJane Smith\n555-5678\njane@email.com\n" |
awk 'NR%3==1{name=$0} NR%3==2{phone=$0} NR%3==0{email=$0; printf "%s\t%s\t%s\n", name, phone, email}' |
column -t -s $'\t'

# Inventory display
printf "Item,Quantity,Price\nLaptop,5,999.99\nMouse,20,25.50\nKeyboard,15,75.00\n" |
column -t -s "," -R 2,3

# Student grades table
printf "Student,Math,Science,English\nAlice,85,92,88\nBob,78,85,80\nCharlie,92,88,95\n" |
column -t -s "," -R 2,3,4

Practical Examples

System Administration

Server Information Dashboard

# System resource summary
{
echo "RESOURCE USAGE VALUE"
echo "CPU Usage: $(top -bn1 | grep "Cpu(s)" | awk '{print $2}')"
echo "Memory Used: $(free -h | awk '/^Mem:/ {print $3}')"
echo "Disk Used: $(df -h / | awk 'NR==2 {print $3}')"
echo "Load Average: $(uptime | awk -F'load average:' '{print $2}')"
} | column -t -s ":"

# User sessions information
who | awk '{print $1, $2, $3, $4, $5}' | column -t

# Service status overview
systemctl list-units --type=service --state=running |
awk 'NR>1 {print $1, $2, $3, $4}' |
column -t -o " "

# Network connections summary
ss -tuln | awk 'NR>1 {print $1, $2, $3, $4, $5}' | column -t

Log Analysis and Reporting

# Error log summary
grep -i error /var/log/syslog | tail -20 |
awk '{print $1, $2, $3, $5, $6}' |
column -t

# Web server access summary
tail -1000 /var/log/apache2/access.log |
awk '{print $1, $7, $9}' |
sort | uniq -c |
sort -nr |
awk '{print $1, $2, $3}' |
column -t -R 1

# Failed login attempts
grep "Failed password" /var/log/auth.log |
awk '{print $1, $2, $3, $9}' |
sort | uniq -c |
sort -nr |
awk '{print $1, $2, $3, $4, $5}' |
column -t -R 1

Development and Data Processing

Code and Project Analysis

# File type distribution in project
find . -type f | awk -F. '{print $NF}' | sort | uniq -c | sort -nr |
awk '{print $1, $2}' | column -t -R 1

# Function count in source files
grep -r "^function\|^def " . --include="*.py" --include="*.js" |
awk -F: '{print $1}' |
sort | uniq -c | sort -nr |
awk '{print $1, $2}' | column -t -R 1

# Git commit statistics
git log --pretty=format:"%an,%ad,%s" --since="1 month ago" |
awk -F, '{print $1, $2, substr($3,1,30)}' |
sort | uniq -c | sort -nr |
awk '{print $1, $2, $3, $4}' | column -t -R 1

Database and Data Export

# Format SQL query results
mysql -u user -p database -e "SELECT name, email, created_at FROM users LIMIT 10;" |
column -t -s $'\t'

# CSV file preview
head -20 data.csv | column -t -s ","

# JSON data table view
cat data.json | jq -r '.[] | "\(.id),\(.name),\(.value)"' |
column -t -s "," -R 1,3

# Configuration file analysis
cat /etc/nginx/nginx.conf |
grep -E "^[^#].*{" |
awk '{print NR, $0}' |
column -t -R 1

Report Generation

Daily System Reports

# Create daily system report
{
echo "SYSTEM DAILY REPORT - $(date)"
echo "==============================="
echo ""
echo "DISK USAGE:"
df -h | awk 'NR>1 {print $1, $2, $3, $4, $5, $6}' | column -t
echo ""
echo "MEMORY USAGE:"
free -h | awk '/^Mem:/ {print "Total: " $2 " Used: " $3 " Free: " $4}'
echo ""
echo "TOP PROCESSES:"
ps aux --sort=-%cpu | head -10 |
awk '{print $1, $2, $3, $4, $11}' |
column -t
} > daily_report.txt

User Activity Reports

# User login activity
last | head -20 |
awk '{print $1, $2, $3, $4, $5}' |
column -t

# File modification tracking
find /home -mtime -7 -type f -exec ls -la {} \; |
awk '{print $1, $3, $5, $6, $7, $9}' |
column -t

# Command usage frequency
history | awk '{print $2}' | sort | uniq -c | sort -nr | head -20 |
awk '{print $1, $2}' | column -t -R 1

Advanced Usage

Pipeline Integration

Complex Data Processing

# Multi-step data processing pipeline
cat access.log |
awk '{print $1, $7, $9}' |
sort | uniq -c |
awk '$1 > 10 {print $1, $2, $3}' |
sort -nr |
column -t -R 1

# Real-time monitoring with formatting
watch -n 5 '
ps aux --sort=-%mem | head -10 |
awk "{print \$1, \$2, \$4, \$11}" |
column -t -R 4
'

# Log analysis with filtering and formatting
grep "ERROR" /var/log/app.log |
tail -50 |
awk -F'[' '{print $1, $2}' |
sed 's/]//g' |
column -t -s " "

Custom Script Integration

#!/bin/bash
# Format server status script

format_status() {
echo "SERVER STATUS REPORT - $(date)"
echo "==============================="
echo ""

echo "UPTIME:"
uptime | awk '{print "System up:", $3, $4, $5}' | column -t -s ":"
echo ""

echo "LOAD AVERAGE:"
uptime | awk -F'load average:' '{split($2, a, ","); print "1min:", a[1], "5min:", a[2], "15min:", a[3]}' |
column -t

echo ""
echo "TOP PROCESSES BY CPU:"
ps aux --sort=-%cpu | head -5 |
awk 'NR>1 {print $1, $3, $4, $11}' |
column -t -R 2,3

echo ""
echo "TOP PROCESSES BY MEMORY:"
ps aux --sort=-%mem | head -5 |
awk 'NR>1 {print $1, $3, $4, $11}' |
column -t -R 2,3
}

format_status

Performance Optimization

Large Data Processing

# Process large files in chunks (each chunk is aligned independently, so column widths may differ between chunks)
split -l 10000 large_file.txt chunk_ # Split large file
for chunk in chunk_*; do
column -t < "$chunk" > "formatted_$chunk"
done
cat formatted_chunk_* > final_output.txt
rm chunk_* formatted_chunk_*

# Stream the data while reporting progress on stderr (column still buffers its whole input to compute widths)
awk 'NR%1000==0 {print "." > "/dev/stderr"; fflush("/dev/stderr")} {print}' large_file.csv |
column -t -s "," > formatted_output.txt

# Parallel processing for multiple files (GNU parallel)
parallel -j 4 "column -t {} > {.}_formatted.txt" ::: *.log

Batch Operations

# Format multiple log files
for log in /var/log/*.log; do
echo "Processing $log..."
tail -100 "$log" | column -t > "${log%.log}_formatted.txt"
done

# Directory comparison
diff -qr dir1 dir2 |
awk '{print $2, $4}' |
column -t -s " " > directory_comparison.txt

# Configuration file summary (count of non-comment, non-blank lines)
for config in /etc/*.conf; do
count=$(grep -v "^#" "$config" | grep -cv "^$")
echo "$config: $count"
done | column -t -s ":" > config_summary.txt

Integration and Automation

Shell Script Examples

Automated Report Generator

#!/bin/bash
# Comprehensive system report generator

generate_report() {
local report_file="system_report_$(date +%Y%m%d_%H%M%S).txt"

{
echo "SYSTEM ANALYSIS REPORT"
echo "Generated on: $(date)"
echo "==============================="
echo ""

echo "SYSTEM INFORMATION:"
echo "------------------"
uname -a | tr ' ' '\n' | paste - - | column -t -s $'\t'
echo ""

echo "DISK USAGE:"
echo "-----------"
df -h | awk 'NR>1 {print $1, $2, $3, $4, $5, $6}' | column -t
echo ""

echo "MEMORY STATUS:"
echo "--------------"
free -h | grep -E "Mem|Swap" | awk '{print $1, $2, $3, $4}' | column -t
echo ""

echo "NETWORK INTERFACES:"
echo "------------------"
ip addr show | grep -E "inet|UP|DOWN" |
awk '{print $1, $2, $3, $4}' |
column -t

} > "$report_file"

echo "Report generated: $report_file"
}

generate_report

Log Analyzer Script

#!/bin/bash
# Log file analyzer with column formatting

analyze_logs() {
local log_pattern="$1"
local days="${2:-7}"

echo "LOG ANALYSIS REPORT - Last $days days"
echo "====================================="
echo ""

# Error analysis
echo "ERROR SUMMARY:"
find /var/log -name "*.log" -mtime -"$days" -exec grep -l "$log_pattern" {} \; |
while read -r logfile; do
error_count=$(grep -c "$log_pattern" "$logfile" 2>/dev/null)
echo "$logfile ${error_count:-0}"
done | column -t -R 2 -s " "

echo ""
echo "TOP ERROR MESSAGES:"
find /var/log -name "*.log" -mtime -"$days" -exec grep "$log_pattern" {} \; |
sort | uniq -c | sort -nr | head -10 |
awk '{print $1, substr($0, index($0,$2))}' |
column -t -R 1 -s " "
}

# Usage: ./log_analyzer.sh "ERROR" 7
analyze_logs "$1" "$2"

Cron Job Integration

Scheduled Reports

# Add to crontab for daily reports
# 0 8 * * * /path/to/daily_report.sh >> /var/log/reports.log 2>&1

#!/bin/bash
# Daily system report script
# File: /usr/local/bin/daily_report.sh

REPORT_DIR="/var/reports"
DATE=$(date +%Y%m%d)

mkdir -p "$REPORT_DIR"

{
echo "DAILY SYSTEM REPORT - $(date)"
echo "================================"
echo ""

echo "DISK USAGE:"
df -h | awk 'NR>1 {print $1, $5, $6}' | column -t
echo ""

echo "LOAD AVERAGE:"
uptime | awk -F'load average:' '{print $2}' | tr -d "," | column -t
echo ""

echo "MEMORY USAGE:"
free -m | awk '/^Mem:/ {print "Used:", $3, "MB", "Free:", $4, "MB"}' | column -t

} > "$REPORT_DIR/system_report_$DATE.txt"

# Keep only last 30 days of reports
find "$REPORT_DIR" -name "system_report_*.txt" -mtime +30 -delete

Troubleshooting

Common Issues

Formatting Problems

# Problem: Columns not aligned properly
# Solution: Specify appropriate width
printf "Very long content\nShort\n" | column -c 20

# Problem: Input contains special characters
# Solution: Handle quotes and escaping
printf "Item \"Special\"\nAnother\n" | column -t

# Problem: Mixed delimiters in input
# Solution: Use multiple separators
printf "Name, Age City\nJohn,25 NYC\n" | column -t -s ", "

# Problem: Very long fields breaking layout
# Solution: Limit field width or use -E option
awk '{if (length($0) > 50) print substr($0,1,50) "..."; else print}' file.txt | column -t

Performance Issues

# Problem: Processing very large files is slow
# Solution: Process in chunks
split -l 5000 large_file.txt part_
for part in part_*; do
column -t "$part" > "formatted_$part" &
done
wait
cat formatted_part_* > final_output.txt
rm part_* formatted_part_*

# Problem: Memory usage too high (column buffers all of its input to compute widths)
# Solution: Reduce the data before formatting it
cut -d',' -f1-3 large_file.csv | head -1000 | column -t -s ","

# Problem: Complex delimiter parsing
# Solution: Preprocess with sed/awk
cat data.txt | sed 's/|/,/g' | column -t -s ","

Debugging Techniques

Diagnostic Commands

# Check input before formatting
cat input.txt | hexdump -C | head -10

# Verify that the expected delimiter actually appears in the input
separator=','   # set this to the delimiter you expect
grep -o "$separator" input.txt | head -10

# Test with small sample
head -5 large_file.txt | column -t

# Check output width
column -t file.txt | wc -L # Longest line length

# Debug alignment issues
column -t file.txt | cat -A # Show all characters

Common Error Solutions

# Error: "column: invalid option"
# Solution: Check version compatibility
column --version

# Error: Columns not aligning with tabs
# Solution: Expand tabs first
expand -t 4 file.txt | column -t

# Error: Non-ASCII input misaligned
# Solution: Run column under a UTF-8 locale so character widths are computed correctly
LC_ALL=C.UTF-8 column -t file.txt    # use any UTF-8 locale available on your system

# Error: Truncated output
# Solution: Increase terminal width or specify -c
column -t -c 200 file.txt

Related Commands

  • awk - Pattern scanning and text processing language
  • cut - Remove sections from each line of files
  • paste - Merge lines of files
  • pr - Convert text files for printing
  • fmt - Simple optimal text formatter
  • fold - Wrap each input line to fit in specified width
  • expand - Convert tabs to spaces
  • unexpand - Convert spaces to tabs
  • sort - Sort lines of text files
  • join - Join lines of two files on a common field

Best Practices

  1. Choose appropriate input separators based on your data format (comma, tab, space, or custom delimiters)
  2. Set proper column width using -c to ensure readable output on different terminal sizes
  3. Use table mode (-t) for structured data so that each field is aligned to the widest entry in its column
  4. Right-align numeric columns using -R for better readability of numerical data
  5. Handle special characters properly when processing data that may contain quotes, commas, or other delimiters
  6. Test with sample data before processing large files to ensure correct formatting (see the sketch after this list)
  7. Use appropriate output separators to create visually appealing table layouts
  8. Consider terminal width when designing output to avoid wrapping or truncation
  9. Process large files efficiently using streaming or chunk processing to avoid memory issues
  10. Validate input format before applying column formatting to ensure consistent results
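
As a concrete illustration of practices 4 and 6 together (sales.csv is a hypothetical file whose third column holds numeric amounts):

# Check the layout on a few rows before formatting the whole file
head -5 sales.csv | column -t -s ','

# Then format everything, right-aligning the numeric third column
column -t -s ',' -R 3 sales.csv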

Performance Tips

  1. Table mode (-t) is generally more efficient for structured data than automatic column detection
  2. Pre-process with awk or sed for complex data transformations before using column
  3. Limit output width with -c to prevent excessive memory usage on very wide data
  4. Use streaming for large files to avoid loading entire datasets into memory
  5. Batch process multiple files using parallel processing when available
  6. Keep delimiters simple - the -s option takes a set of literal separator characters, not a regular expression, so do any complex splitting in awk or sed first
  7. Filter data before formatting to reduce the amount of data column needs to process (see the sketch after this list)
  8. Use appropriate buffer sizes when integrating with other commands in pipelines
  9. Consider using expand before column for tab-heavy data to ensure consistent spacing
  10. Optimize terminal settings to handle large outputs efficiently (scrollback buffer, etc.)
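
A small sketch of tips 7 and 9; access.log and report.tsv are placeholder files:

# Keep only the rows and fields you need before formatting
awk '$9 == 500 {print $1, $7, $9}' access.log | sort | uniq -c | sort -nr | column -t -R 1

# Normalize tabs to spaces before aligning tab-heavy data
expand -t 8 report.tsv | column -t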

Advanced Configuration

Environment Variables

# Set default column width
COLUMNS=80 column file.txt

# Use with locale settings
LC_ALL=C column -t file.txt

# column has no environment variable for the separator; pass it explicitly with -s
column -t -s "|" file.txt

Terminal Integration

# Create shell alias for common column usage
alias coltable="column -t -s $'\t'"
alias colcsv="column -t -s ','"

# Function to format any delimited data
format_col() {
local delimiter="$1"
shift
column -t -s "$delimiter" "$@"
}

Custom Scripts

#!/bin/bash
# Smart column formatter - auto-detect delimiter

smart_column() {
local file="$1"

# Auto-detect common delimiters
if grep -q $'\t' "$file"; then
column -t -s $'\t' "$file"
elif grep -q ',' "$file"; then
column -t -s ',' "$file"
elif grep -q '|' "$file"; then
column -t -s '|' "$file"
else
column -t "$file"
fi
}

Real-world Applications

Data Science and Analysis

# Format experimental results
echo "Experiment,Result,Error" > results.csv
echo "Test1,85.2,2.1" >> results.csv
echo "Test2,92.7,1.8" >> results.csv
echo "Test3,78.4,3.2" >> results.csv
column -t -s ',' results.csv

# Compare multiple datasets
for dataset in data_*.csv; do
echo "=== $dataset ==="
head -10 "$dataset" | column -t -s ','
done

System Monitoring Dashboards

# Real-time system monitoring
watch -n 5 '
echo "=== SYSTEM STATUS ==="
echo "CPU and Memory:"
ps aux --sort=-%cpu | head -6 | awk "{print \$1, \$2, \$3, \$4, \$11}" | column -t
echo ""
echo "Disk Usage:"
df -h | grep -E "^/dev/" | column -t
echo ""
echo "Network Connections:"
ss -tuln | head -10 | column -t
'

Database Query Results

# Format MySQL query output
mysql -u user -p -e "
SELECT
name as 'Name',
email as 'Email',
created_at as 'Created'
FROM users
LIMIT 10;
" database | column -t -s $'\t'

# Format PostgreSQL results
psql -U user -d database -c "
SELECT name, email, created_at
FROM users
LIMIT 10;
" | column -t -s '|'

Integration with Modern Tools

JSON Processing

# Convert JSON to table format
cat data.json | jq -r '.[] | "\(.id),\(.name),\(.value)"' |
column -t -s ',' -R 1,3

# Complex JSON table extraction
curl -s "https://api.example.com/users" |
jq -r '.users[] | "\(.id),\(.name),\(.email),\(.city)"' |
column -t -s ',' -N ID,NAME,EMAIL,CITY -H EMAIL   # name the columns, hide EMAIL

CSV Processing Pipeline

# Complete CSV analysis pipeline
analyze_csv() {
local csv_file="$1"

echo "=== File Information ==="
wc -l "$csv_file" | awk '{print "Lines:", $1}'

echo "=== Column Headers ==="
head -1 "$csv_file" | column -t -s ',' -o " | "

echo "=== Sample Data ==="
head -10 "$csv_file" | column -t -s ','

echo "=== Data Summary ==="
awk -F, 'NR>1 {for(i=1;i<=NF;i++) sum[i]+=$i}
END {for(i=1;i<=NF;i++) print "Column", i, "Sum:", sum[i]}' "$csv_file" |
column -t -s ' '
}

Log File Analysis

# Web server log analysis
analyze_apache_log() {
local log_file="$1"

echo "=== Top 10 IP Addresses ==="
awk '{print $1}' "$log_file" | sort | uniq -c | sort -nr | head -10 |
awk '{print $1, $2}' | column -t -R 1

echo "=== Top 10 Pages ==="
awk '{print $7}' "$log_file" | sort | uniq -c | sort -nr | head -10 |
awk '{print $1, $2}' | column -t -R 1

echo "=== Status Code Distribution ==="
awk '{print $9}' "$log_file" | sort | uniq -c | sort -nr |
awk '{print $1, $2}' | column -t -R 1
}

Troubleshooting Guide

Common Problems and Solutions

Issue: Columns not aligning properly

# Problem: Inconsistent spacing
# Solution: Use expand to normalize tabs first
expand -t 4 input.txt | column -t

# Problem: Mixed delimiters
# Solution: Standardize delimiters first
sed 's/[|;,]/,/g' input.txt | column -t -s ','

Issue: Memory errors with large files

# Problem: Out of memory processing
# Solution: Process in chunks
split -l 10000 large_file.txt chunk_
for chunk in chunk_*; do
column -t "$chunk" > "formatted_$chunk" &
done
wait
cat formatted_chunk_* > final_output.txt
rm chunk_* formatted_chunk_*

Issue: Special characters causing formatting problems

# Problem: Unicode characters breaking alignment
# Solution: Run column under a UTF-8 locale so multibyte character widths are handled correctly
LC_ALL=C.UTF-8 column -t file.txt

# Problem: Quotes and commas in data
# Solution: Parse with a CSV-aware tool instead of splitting on raw commas (for example csvlook from csvkit)
csvlook file.csv

The column command is an essential utility for transforming raw text data into readable, organized tables. Its ability to handle various input formats and customize output layout makes it invaluable for system administration, data analysis, and report generation tasks, providing clear, columnated output that enhances data readability and presentation.