I have the following Python script. In it, I'm iterating over a CSV file that contains rows of membership cards. In many cases there is more than one entry per card. Currently I iterate over each row, then use loc to find all other instances of the current row's card so I can combine them into a single POST to an API. What I'd like to do, once that POST is done, is remove all the rows I just combined, so the iteration never touches them again.
That's the part I'm stuck on. Any ideas? Essentially, I want to remove every row in card_list from the csv before the next iteration, so that even if there are 5 rows with the same card number, I only process that card once. I tried putting
csv = csv[csv.card != row.card]
at the end of the loop, thinking it might regenerate the DataFrame without any rows whose card matched the one just processed, but it didn't work.
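Here is a minimal reproduction of what I mean, with a toy frame: the reassignment never affects the rows the loop visits, because the iterator was already built from the original DataFrame.

```python
import pandas as pd

df = pd.DataFrame({'card': [1, 1, 2], 'voucher': [10, 11, 20]})

seen = []
for row in df.itertuples():
    seen.append(row.card)
    # Rebinding the name `df` creates a brand-new DataFrame, but the
    # iterator keeps reading from the original object, so it still
    # yields every row.
    df = df[df.card != row.card]

print(seen)  # every row is still visited: [1, 1, 2]
```

My full script is below.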
import urllib3
import json
import pandas as pd
import os
import time
import pyfiglet
from datetime import datetime
import array as arr
for row in csv.itertuples():
    dt = datetime.now()
    vouchers = []
    if minutePassed(time.gmtime(lastrun)[4]):
        print('Getting new token...')
        token = get_new_token()
        lastrun = time.time()
    print('processing ' + str(int(row.card)))
    card_list = csv.loc[csv['card'] == int(row.card)]
    print('found ' + str(len(card_list)) + ' vouchers against this card')
    for row in card_list.itertuples():
        print('appending card ' + str(int(row.card)) + ' voucher ' + str(row.voucher))
        vouchers.append(row.voucher)
    print('vouchers, ', vouchers)
    encoded_data = json.dumps({
        "store_id": row.store,
        "transaction_id": "11111",
        "card_number": int(row.card),
        "voucher_instance_ids": vouchers
    })
    print(encoded_data)
    number += 1
    r = http.request('POST', lcs_base_path + 'customer/auth/redeem-commit', body=encoded_data, headers={'x-api-key': api_key, 'Authorization': 'Bearer ' + token})
    response_data = json.loads(r.data)
    if (r.status == 200):
        print(str(dt) + ' ' + str(number) + ' done. processing card:' + str(int(row.card)) + ' voucher:' + str(row.voucher) + ' store:' + str(row.store) + ' status: ' + response_data['response_message'] + ' request:' + response_data['lcs_request_id'])
    else:
        print(str(dt) + ' ' + str(number) + ' done. failed to commit ' + str(int(row.card)) + ' voucher:' + str(row.voucher) + ' store:' + str(row.store) + ' status: ' + response_data['message'])
        new_row = {'card': row.card, 'voucher': row.voucher, 'store': row.store, 'error': response_data['message']}
        failed_csv = failed_csv.append(new_row, ignore_index=True)
    failed_csv.to_csv(failed_csv_file, index=False)
    csv = csv[csv.card != row.card]
print('script completed')
print(str(len(failed_csv)) + ' failed vouchers will be saved to failed_commits.csv')
print("--- %s seconds ---" % (time.time() - start_time))
The first rule of thumb is: never mutate the thing you are iterating over. Also, I think you're misusing itertuples there — the inner loop reuses the name row, shadowing the outer loop's variable. Let's use a groupby instead:
for card, card_list in csv.groupby('card'):
    # card_list now contains all the rows that have a specific card --
    # exactly like `card_list` in your code
    print('processing', card)
    print('found', len(card_list), 'vouchers against this card')
    # the inner `itertuples` loop is overkill -- groupby has already
    # collected the rows for you, so REMOVE IT:
    # for row in card_list.itertuples():
    encoded_data = json.dumps({
        "store_id": card_list['store'].iloc[0],  # same as `row.store`
        "transaction_id": "11111",
        "card_number": int(card),
        "voucher_instance_ids": list(card_list['voucher'])  # same as `vouchers`
    })
    # ... the rest of your code (POST, logging, failed_csv)
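To make the difference concrete, here is a self-contained sketch with toy data (column names taken from your script): groupby visits each distinct card exactly once, handing you every matching row at the same time, so there is nothing left to delete.

```python
import pandas as pd

# Toy frame standing in for the CSV; three rows share card 111
csv = pd.DataFrame({
    'card':    [111, 111, 222, 111],
    'voucher': [1, 2, 3, 4],
    'store':   [9, 9, 9, 9],
})

grouped = {}
for card, card_list in csv.groupby('card'):
    # one iteration per distinct card, with all of its rows in card_list
    grouped[card] = list(card_list['voucher'])

print(grouped)  # {111: [1, 2, 4], 222: [3]}
```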