
wesm / arrow / build 1754 / job 8
Coverage: 0% (master: 0%)

Build:
Default branch: master
Ran 11 Nov 2019 06:42PM UTC
Files: 12
Run time: 1s

19 Jul 2017 12:16PM UTC coverage: 66.265% (-10.4%) from 76.633%
Job: 2.2
Pull #21
CI service: travis-ci
Committer: wesm
ARROW-1167: [Python] Support chunking string columns in Table.from_pandas

This resolves the error with converting the dataset in ARROW-1167, which only takes up 4.5 GB in memory but has a single column with over 2GB in binary data.

The unit test for this is not run in CI because of its large memory allocation, but it can be run with

```
py.test pyarrow --large_memory
```

cc @jeffknupp

Author: Wes McKinney <wes.mckinney@twosigma.com>

Closes #867 from wesm/ARROW-1167 and squashes the following commits:

dae62326 [Wes McKinney] cpplint
dcdec91a [Wes McKinney] Support ChunkedArray outputs of Array.from_pandas
150e9fc9 [Wes McKinney] Produced ChunkedArray when exceeding 2GB in a single BinaryArray column
707555f8 [Wes McKinney] Split up pandas_convert, make PandasObjectsToArrow return ChunkedArray to accommodate large string data
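
To make the behavior described in the commit message concrete, here is a minimal sketch (not the PR's unit test) of what chunked conversion looks like from the caller's side. The DataFrame contents are a stand-in for the ARROW-1167 dataset, and the accessors assume a recent pyarrow where `Table.column()` returns a ChunkedArray directly (older releases wrap it in a Column):

```python
# Illustrative sketch only; data volume, accessors, and chunk count are
# assumptions for demonstration, and exact APIs differ between pyarrow versions.
import pandas as pd
import pyarrow as pa

# Roughly 4 GB of string data once written into Arrow binary buffers, which is
# more than a single BinaryArray (32-bit offsets, ~2 GB limit) can hold.
df = pd.DataFrame({"strings": ["x" * 1024] * 4_000_000})

table = pa.Table.from_pandas(df)

# With chunking support, the oversized column comes back as a ChunkedArray
# split across multiple chunks instead of the conversion failing outright.
col = table.column(0)
print(col.num_chunks)  # expected to be greater than 1 for > 2 GB of data
```

Running this needs RAM on the order of the data size, which is also why the actual unit test is gated behind the --large_memory flag mentioned above.
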
Pull Request #21:

15 of 56 branches covered (26.79%)

Branch coverage included in aggregate %.

260 of 359 relevant lines covered (72.42%)

0.86 hits per line
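
Since branch coverage is included in the aggregate, the 66.265% headline figure for this job appears to be the combined ratio (260 covered lines + 15 covered branches) / (359 relevant lines + 56 branches) = 275 / 415 ≈ 66.265%; this is an inference from the numbers shown here, not a statement of Coveralls' exact formula.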

Source Files on job 1754.8 (2.2)
  • Build 2476
  • Travis Job 1754.8
  • Commit 2c5b412c on GitHub