pandas df.loc[z,x]=y how to improve speed?


Problem description


I have identified one pandas command

timeseries.loc[z, x] = y

to be responsible for most of the time spent in an iteration. Now I am looking for better approaches to accelerate it. The loop covers fewer than 50k elements (the production goal is ~250k or more), but already needs a sad 20 seconds.

Here is my code (ignore the top half, it is just the timing helper)

def populateTimeseriesTable(df, observable, timeseries):
    """
    Go through all rows of df and 
    put the observable into the timeseries 
    at correct row (symbol), column (tsMean).
    """

    print "len(df.index)=", len(df.index)  # show number of rows

    global bf, t
    bf = time.time()                       # set 'before' to now
    t = dict([(i,0) for i in range(5)])    # fill category timing with zeros

    def T(i):
        """
        timing helper: Add passed time to category 'i'. Then set 'before' to now.
        """
        global bf, t 
        t[i] = t[i] + (time.time()-bf)
        bf = time.time()        

    for i in df.index:             # this is the slow loop
        bf = time.time()

        sym = df["symbol"][i]
        T(0)

        tsMean = df["tsMean"][i]
        T(1)

        tsMean = tsFormatter(tsMean)
        T(2)

        o = df[observable][i]
        T(3)

        timeseries.loc[sym, tsMean] = o
        T(4)

    from pprint import pprint
    print "times needed (total = %.1f seconds) for each command:" % sum(t.values())
    pprint (t)

    return timeseries

With (not important, not slow)

def tsFormatter(ts):
    "as human readable string, only up to whole seconds"
    return time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(ts))


--> The to-be-optimized code is in the for-loop.

(T, and t are just helper function & dict, for the timing.)

I have timed every step. The vast majority of time:

len(df.index)= 47160
times needed (total = 20.2 seconds) for each command:
{0: 1.102,
 1: 0.741,
 2: 0.243,
 3: 0.792,
 4: 17.371}

is spent in the last step

timeseries.loc[sym, tsMean] = o

I have already downloaded and installed PyPy - but sadly, it doesn't support pandas yet.

Any ideas how to speed up populating a 2D array?

Thanks!


Edit: Sorry, hadn't mentioned - 'timeseries' is a dataframe too:

timeseries = pd.DataFrame({"name": titles}, index=index)

Solution

UPDATE: Starting from Pandas 0.20.1, the .ix indexer is deprecated in favor of the stricter .iloc and .loc indexers.
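
For reference, a minimal sketch (not part of the original answer) of how the deprecated .ix calls used in the timings below map onto the stricter indexers; idx here is a hypothetical row label in the label-based case and an integer position in the positional case:

# label-based lookup: df.ix[idx, 'b']  ->  .loc (or .at for a single scalar)
value = df.loc[idx, 'b']
value = df.at[idx, 'b']

# position-based lookup: df.ix[idx, 1]  ->  .iloc (or .iat for a single scalar)
value = df.iloc[idx, 1]
value = df.iat[idx, 1]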

=====================================================================

@jezrael has provided an interesting comparison and I decided to repeat it using more indexing methods against a 10M-row DF (the size doesn't actually matter in this particular case):

setup:

In [15]: df = pd.DataFrame(np.random.rand(10**7, 5), columns=list('abcde'))

In [16]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000000 entries, 0 to 9999999
Data columns (total 5 columns):
a    float64
b    float64
c    float64
d    float64
e    float64
dtypes: float64(5)
memory usage: 381.5 MB

In [17]: df.shape
Out[17]: (10000000, 5)

Timing:

In [37]: %timeit df.loc[random.randint(0, 10**7), 'b']
1000 loops, best of 3: 502 µs per loop

In [38]: %timeit df.iloc[random.randint(0, 10**7), 1]
1000 loops, best of 3: 394 µs per loop

In [39]: %timeit df.at[random.randint(0, 10**7), 'b']
10000 loops, best of 3: 66.8 µs per loop

In [41]: %timeit df.iat[random.randint(0, 10**7), 1]
10000 loops, best of 3: 32.9 µs per loop

In [42]: %timeit df.ix[random.randint(0, 10**7), 'b']
10000 loops, best of 3: 64.8 µs per loop

In [43]: %timeit df.ix[random.randint(0, 10**7), 1]
1000 loops, best of 3: 503 µs per loop
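
The r DataFrame used further below is not constructed explicitly in the answer; a minimal sketch, assuming it simply collects the %timeit medians above (values in microseconds):

import pandas as pd

# assumption: gather the measured medians (µs) into a small frame for display and plotting
r = pd.DataFrame({
    'method': ['loc', 'iloc', 'at', 'iat', 'ix_label', 'ix_integer'],
    'timing': [502.0, 394.0, 66.8, 32.9, 64.8, 503.0],
})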

Results as a bar plot:

Timing data as DF:

In [88]: r
Out[88]:
       method  timing
0         loc   502.0
1        iloc   394.0
2          at    66.8
3         iat    32.9
4    ix_label    64.8
5  ix_integer   503.0

In [89]: r.to_dict()
Out[89]:
{'method': {0: 'loc',
  1: 'iloc',
  2: 'at',
  3: 'iat',
  4: 'ix_label',
  5: 'ix_integer'},
 'timing': {0: 502.0,
  1: 394.0,
  2: 66.799999999999997,
  3: 32.899999999999999,
  4: 64.799999999999997,
  5: 503.0}}

Plotting

import seaborn as sns   # needed for the bar plot below

ax = sns.barplot(data=r, x='method', y='timing')
ax.tick_params(labelsize=16)
[ax.annotate(str(round(p.get_height(),2)), (p.get_x() + 0.2, p.get_height() + 5)) for p in ax.patches]
ax.set_xlabel('indexing method', size=20)
ax.set_ylabel('timing (microseconds)', size=20)
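
Applied back to the question, these numbers point at the single-element assignment as the thing to change. A hedged sketch of the drop-in swap (assuming the sym row and the tsMean column already exist in timeseries; otherwise they would need to be created first):

# inside the original loop: scalar, label-based assignment
# .at is the fastest label-based scalar accessor in the timings above
timeseries.at[sym, tsMean] = o      # instead of timeseries.loc[sym, tsMean] = o

Whether this swap alone is enough, or whether the loop should be vectorized entirely, depends on the data, but it targets exactly the step that the question measured at 17.4 of the 20.2 seconds.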
