While working on multiple web scraping projects, I often encountered challenges with text extraction. Python’s parsel is a powerful library for working with XPath and CSS selectors, but certain recurring issues, such as text encoding errors (e.g., cafÃ© instead of café), unnecessary whitespace, and deeply nested or broken HTML structures, often required additional manual cleanup.

To address these specific challenges, I developed parsel_text—a tool that builds on top of parsel while integrating BeautifulSoup for more flexible HTML parsing and ftfy for automatic encoding correction. The goal wasn’t to replace parsel, which excels at structured parsing, but to offer a specialized solution for cleaner, more uniform text extraction.

Key Improvements Introduced by parsel_text:

  • Encoding Fixes: ftfy ensures proper normalization, so mojibake like cafÃ© is automatically corrected to café (a rough sketch of this kind of cleanup follows this list).

  • Whitespace Cleanup: Automatic removal of redundant spaces and line breaks, reducing the need for manual .strip() or regex fixes.

  • Nested HTML Handling: BeautifulSoup allows for more forgiving parsing of deeply nested or imperfect HTML structures.
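
To make the cleanup concrete, here is a rough, standalone sketch that applies the same ingredients directly: BeautifulSoup for forgiving parsing, ftfy for mojibake, and a regex for whitespace. It illustrates the kind of work parsel_text automates; it is not the library’s actual implementation.

import re

import ftfy
from bs4 import BeautifulSoup

raw_html = "<div><p>  cafÃ©\n   au lait  </p><p>Second   paragraph</p></div>"

# BeautifulSoup tolerates imperfect or deeply nested markup.
soup = BeautifulSoup(raw_html, "html.parser")

# Extract the visible text, repair mojibake, and collapse extra whitespace.
text = soup.get_text(separator=" ")
text = ftfy.fix_text(text)               # "cafÃ©" -> "café"
text = re.sub(r"\s+", " ", text).strip()

print(text)  # café au lait Second paragraph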


How to Use parsel_text:

from parsel import Selector
from parsel_text import get_xpath_text

html_content = """
<div id="content">
    <p>Hello, world!</p>
    <p>Welcome to the parsel_text library.</p>
</div>
"""

selector = Selector(text=html_content)
text = get_xpath_text(selector, xpath="//p/text()")
print(text)

Output:

Hello, world!
Welcome to the parsel_text library.

No additional loops or cleaning steps needed—parsel_text delivers the cleaned text directly.
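
For comparison, this is roughly what the equivalent extraction looks like with plain parsel, where joining and stripping are left to you (the cleanup below is illustrative, not what parsel_text does internally):

from parsel import Selector

html_content = """
<div id="content">
    <p>Hello, world!</p>
    <p>Welcome to the parsel_text library.</p>
</div>
"""

selector = Selector(text=html_content)

# Plain parsel returns a list of raw text nodes; cleanup is manual.
fragments = selector.xpath("//p/text()").getall()
text = "\n".join(fragment.strip() for fragment in fragments if fragment.strip())
print(text)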


When Should You Use parsel_text?

Consider using parsel_text if:

  • ✅ You frequently scrape text from websites with messy or inconsistent HTML.

  • ✅ You want automatic handling of encoding issues and whitespace cleanup.

  • ✅ You prefer a single tool that simplifies text extraction.

The standard parsel library is an excellent choice on its own. parsel_text is designed for situations where additional text cleanup and HTML flexibility are required.


Performance Considerations:

It’s important to note that `parsel_text` is slightly heavier and slower than `parsel` alone because it leverages both `BeautifulSoup` and `ftfy` for enhanced text processing. This added complexity provides better text quality but can impact performance when scraping large datasets. For simple, well-structured pages, `parsel` might be sufficient.
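
If performance matters for your workload, a quick measurement on a representative page is the simplest way to decide. A minimal sketch along these lines (the exact numbers will depend on your pages and environment):

import timeit

from parsel import Selector
from parsel_text import get_xpath_text

html_content = "<div>" + "<p>  Some   text  </p>" * 500 + "</div>"
selector = Selector(text=html_content)

plain = timeit.timeit(lambda: selector.xpath("//p/text()").getall(), number=100)
cleaned = timeit.timeit(lambda: get_xpath_text(selector, xpath="//p/text()"), number=100)

print(f"parsel alone: {plain:.3f}s")
print(f"parsel_text:  {cleaned:.3f}s")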


Additional Features:

  • Row-wise Extraction: The get_results_row_text function takes a parsel.Selector object as input and returns a list of cleaned text results (a hedged usage sketch follows this list).

  • Direct BeautifulSoup Extraction: The get_bs4_soup_text function allows extracting text directly from a BeautifulSoup object, useful when you already have parsed HTML.

  • Optional Mojibake Fixing: The fix_mojibake parameter lets you toggle automatic text correction, giving you more control.
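
A hedged sketch of how these features might fit together. The exact signatures below are assumptions based on the descriptions above; check the README for the authoritative API.

from bs4 import BeautifulSoup
from parsel import Selector
from parsel_text import get_bs4_soup_text, get_results_row_text, get_xpath_text

html_content = "<ul><li>First   item</li><li>Second   item</li></ul>"

# Row-wise extraction from a parsel.Selector (signature assumed).
selector = Selector(text=html_content)
rows = get_results_row_text(selector)

# Direct extraction from an already parsed BeautifulSoup object (signature assumed).
soup = BeautifulSoup(html_content, "html.parser")
soup_text = get_bs4_soup_text(soup)

# Toggling mojibake correction off (parameter placement assumed).
raw_text = get_xpath_text(selector, xpath="//li/text()", fix_mojibake=False)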


Explore More:

- [Official Parsel Documentation](https://parsel.readthedocs.io)

- [Check out the README on GitHub](https://github.com/carlosplanchon/parsel_text)

Happy scraping!