WIBA: What Is Being Argued?

Advanced argument detection and analysis tool for researchers, educators, and analysts.

Installation

Important: Before using WIBA, you'll need to create an account to get your API token.

Once you have your API token, install the WIBA client using pip:

pip install wiba

Initialize the client with your API token:

from wiba import WIBA

# Get your API token from the Account tab after registration
analyzer = WIBA(api_token="your_api_token_here")

Don't have an account yet? Register here to get your API token.
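
To avoid committing your token to source control, you can load it from the environment instead of hardcoding it. A minimal sketch (the variable name WIBA_API_TOKEN is an arbitrary choice for this example, not something the client requires):

import os

from wiba import WIBA

# Load the API token from an environment variable instead of hardcoding it.
# WIBA_API_TOKEN is a name chosen for this example.
api_token = os.environ["WIBA_API_TOKEN"]
analyzer = WIBA(api_token=api_token)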

Quick Start

Note: Make sure you have created an account and obtained your API token before starting.

Here's a simple example to get started with WIBA:

from wiba import WIBA

# Initialize client
analyzer = WIBA(api_token="your_api_token_here")

# Example text
text = "Climate change is real because global temperatures are rising."

# Detect if it's an argument
result = analyzer.detect(text)
print(f"Argument detected: {result.argument_prediction}")
print(f"Confidence: {result.confidence_score}")

Detect Arguments

The detect() method identifies whether a text contains an argument:

# Single text
result = analyzer.detect("Climate change is real because temperatures are rising.")
print(result.argument_prediction)  # "Argument" or "NoArgument"
print(result.confidence_score)     # Confidence score between 0 and 1

# Multiple texts
texts = [
    "Climate change is real because temperatures are rising.",
    "This is just a simple statement without any argument."
]
results = analyzer.detect(texts)
for r in results:
    print(f"Text: {r.text}")
    print(f"Prediction: {r.argument_prediction}")

# Using DataFrame
import pandas as pd
df = pd.DataFrame({'text': texts})
results_df = analyzer.detect(df, text_column='text')

Extract Topics

The extract() method identifies the main topic being argued about:

# Single text
result = analyzer.extract("Climate change is a serious issue because it affects our environment.")
print(result.topics)  # List of extracted topics

# Multiple texts
texts = [
    "Climate change is a serious issue because it affects our environment.",
    "We need better healthcare because current systems are inadequate."
]
results = analyzer.extract(texts)
for r in results:
    print(f"Text: {r.text}")
    print(f"Topics: {r.topics}")

# Using DataFrame
df = pd.DataFrame({'text': texts})
results_df = analyzer.extract(df, text_column='text')

Analyze Stance

The stance() method determines the stance towards a specific topic:

# Single text
text = "We must take action on climate change because the evidence is overwhelming."
topic = "climate change"
result = analyzer.stance(text, topic)
print(f"Stance: {result.stance}")  # "Favor", "Against", or "NoArgument"

# Multiple texts
texts = [
    "We must take action on climate change because the evidence is overwhelming.",
    "Climate change policies will harm the economy and cost jobs."
]
topics = ["climate change", "climate change"]
results = analyzer.stance(texts, topics)
for r in results:
    print(f"Text: {r.text}")
    print(f"Topic: {r.topic}")
    print(f"Stance: {r.stance}")

# Using DataFrame
df = pd.DataFrame({
    'text': texts,
    'topic': topics
})
results_df = analyzer.stance(df, text_column='text', topic_column='topic')

Discover Arguments

The discover_arguments() method finds argumentative segments in longer texts:

# Single text
text = """Climate change is a serious issue. Global temperatures are rising at an 
unprecedented rate. This is causing extreme weather events. However, some argue 
that natural climate cycles are responsible."""

results_df = analyzer.discover_arguments(
    text,
    window_size=2,  # Number of sentences per window
    step_size=1     # Number of sentences to move window
)
print(results_df[['text_segment', 'argument_prediction', 'argument_confidence']])

# Using DataFrame
# text1 and text2 are additional long documents (strings)
df = pd.DataFrame({'text': [text1, text2]})
results_df = analyzer.discover_arguments(
    df,
    text_column='text',
    window_size=2,
    step_size=1
)

Batch Processing

All methods support batch processing for efficient handling of multiple texts:

# Process large datasets
import pandas as pd

# Read data
df = pd.read_csv('texts.csv')

# Process in batches
results_df = analyzer.detect(
    df,
    text_column='text',
    batch_size=100,    # Number of texts per batch
    show_progress=True # Show progress bar
)

# Save results
analyzer.save_results(results_df, 'results.csv', format='csv')
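
For files too large to load into memory at once, you can also stream the CSV in chunks with pandas and concatenate the results. A sketch assuming the same detect() and save_results() signatures shown above:

import pandas as pd

# Stream a very large CSV in 10,000-row chunks and process each chunk separately.
chunk_results = []
for chunk in pd.read_csv('texts.csv', chunksize=10_000):
    chunk_results.append(
        analyzer.detect(
            chunk,
            text_column='text',
            batch_size=100,
            show_progress=True
        )
    )

# Combine the per-chunk results into a single DataFrame and save it.
results_df = pd.concat(chunk_results, ignore_index=True)
analyzer.save_results(results_df, 'results.csv', format='csv')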

Error Handling

WIBA provides robust error handling and validation:

import pandas as pd

from wiba import ValidationError, WIBAError

try:
    # Missing required column
    bad_df = pd.DataFrame({'wrong_column': ['test']})
    analyzer.detect(bad_df)
except ValidationError as e:
    print(f"Validation error: {str(e)}")

try:
    # Empty DataFrame
    empty_df = pd.DataFrame({'text': []})
    analyzer.detect(empty_df)
except ValidationError as e:
    print(f"Validation error: {str(e)}")

try:
    # Invalid stance input
    analyzer.stance("test text", None)
except ValidationError as e:
    print(f"Validation error: {str(e)}")

# Handle API errors
try:
    result = analyzer.detect("some text")
except WIBAError as e:
    print(f"API error: {str(e)}")

Performance

Last Updated: 2/6/2025

Note: Recommended usage for the Detect, Extract, and Stance methods is roughly 100 words per sample. The Extract endpoint accepts up to 7,000 words, but behavior may be unexpected at that length.
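
If your documents run well past ~100 words, one option is to split them into roughly 100-word chunks before calling detect(). A minimal sketch (the chunking helper below is illustrative, not part of the WIBA client):

def chunk_words(text, max_words=100):
    """Split a long text into chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

long_document = "..."                 # your own long text goes here
chunks = chunk_words(long_document)   # each chunk stays near the recommended length
results = analyzer.detect(chunks)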

Model Speeds

WIBA-Extract: 282 samples/sec
WIBA-Detect: 8 samples/sec
WIBA-Stance: 8 samples/sec
WIBA-Discover: 4 samples/sec*

* WIBA-Discover speed is approximate as rows can contain multiple sentences.
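
These throughput figures can be used for a rough runtime estimate before launching a large job. A back-of-the-envelope sketch using the speeds listed above (actual throughput will vary with sample length and load):

# Back-of-the-envelope runtime estimate from the published throughput figures above.
SAMPLES_PER_SEC = {"detect": 8, "extract": 282, "stance": 8, "discover": 4}

def estimated_minutes(n_samples, method):
    """Approximate wall-clock minutes to process n_samples with a given method."""
    return n_samples / SAMPLES_PER_SEC[method] / 60

# Example: 10,000 texts through Detect at ~8 samples/sec is roughly 21 minutes.
print(f"{estimated_minutes(10_000, 'detect'):.1f} minutes")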

Model Performance

WIBA-Extract: F1 73.3% (latest model updated 2/6/25)
WIBA-Detect: F1 82.23% (latest model updated 2/6/25)
WIBA-Stance: F1 71.26% (latest model updated 11/2/25)

Performance Update: Our team is actively investigating and implementing optimizations to improve method speeds. Stay tuned for updates!

Argument Mining Resources

Research Paper

WIBA: What Is Being Argued? A Comprehensive Approach to Argument Mining

Arman Irani, Ju Yeon Park, Kevin Esterling, Michalis Faloutsos (2024)

A novel framework and suite of methods for comprehensively understanding "What Is Being Argued" across contexts. The approach detects (a) the existence, (b) the topic, and (c) the stance of an argument, achieving F1 scores of 79-86% for argument detection, 71% similarity for topic identification, and 71-78% for stance classification across diverse benchmark datasets.

Research Paper

Terminal Veracity: How Russian Propaganda Uses Telegram to Manufacture 'Objectivity' on the Battlefield

Mark W. Perry, Arman Irani (2023)

This article investigates over 130,000 Telegram messages, 15,000 Telegram forwards, and 750 news articles from Russian-affiliated media to assess the information supply chain between Russian media and Telegram channels covering the war in Ukraine. Using machine-learning techniques, the research provides a framework for argument and network analysis that disambiguates narratives, channels, and users, and maps the dissemination pathways of influence operations. The findings indicate that a central feature of Russian war reporting is the prevalence of neutral, non-argumentative language. Moreover, dissemination patterns between media sites and Telegram channels reveal a well-cited information-laundering network with a distinct supply chain of covert, semi-covert, and overt channel types active at the seed, copy, and amplification levels of operation.

Journal of Information Warfare
Research Paper

ArguSense: Argument-Centric Analysis of Online Discourse

Arman Irani, Michalis Faloutsos, Kevin Esterling (2024)

A comprehensive framework for analyzing arguments in online forums, featuring unsupervised topic detection, argument visualization, and content quantification through similarity and clustering algorithms. The study demonstrates its effectiveness through analysis of GMO-related discussions across Reddit communities.

Research Paper

Overview of DialAM-2024: Argument Mining in Natural Language Dialogues

Ramon Ruiz-Dolz, John Lawrence, Ella Schad, Chris Reed (2024)

First shared task in dialogical argument mining, exploring the integration of argumentative relations and speech illocutions in a unified framework. The study presents results from six teams working on identifying propositional and illocutionary relations in argument maps.

Research Paper

Detecting Argumentative Fallacies in the Wild

Ramon Ruiz-Dolz, John Lawrence (2023)

A groundbreaking analysis of the limitations of data-driven approaches in real-world argument mining scenarios. The study introduces a validation corpus for natural language argumentation schemes and provides crucial insights for deploying argument mining systems in practical applications.

Upcoming Conference

The 12th Workshop on Argument Mining (ArgMining 2025)

July 31st or August 1st, 2025 | Vienna, Austria

A premier workshop co-located with ACL 2025, focusing on computational linguistics and argument mining. The workshop aims to broaden its scope by incorporating perspectives from social science, psychology, and humanities while creating synergies between argument mining and natural language reasoning.

Dataset

DialAM-2024 Dataset

Ruiz-Dolz et al. (2024)

A comprehensive dataset for dialogical argument mining, featuring annotated natural language dialogues with both argumentative relations and speech illocutions. Perfect for developing and evaluating dialogue-based argument mining systems.

Dataset

UKP Sentential Argument Mining Corpus

UKP Lab, TU Darmstadt

A large-scale argument mining corpus containing over 25,000 annotated arguments from heterogeneous sources. Perfect for training and evaluating argument mining systems.

Tutorial

Getting Started with WIBA

WIBA Team

A comprehensive guide to using WIBA for argument analysis. Learn how to analyze texts, visualize arguments, and interpret results using our web-based platform.