I would like to know if I can speed up this code using NumPy.
The code runs correctly, but I know it can be done better with `np.where`, which I've tried without success :)
For each entry in `syn` I want to compare the string in the first position ('000', '001', ...) with the variable `syndrome` (cast to a string) and, on a match, get the int in the second position.
For example, if I have the syndrome '100' I get 4, so I know I have to flip the 4th bit of an 8-bit codeword.
```python
def recover_data(noisy_data):
    syn = [['000', 'none'], ['001', 6], ['010', 5], ['011', 3],
           ['100', 4], ['101', 0], ['110', 1], ['111', 2]]
    for ix in range(noisy_data.shape[0]):
        unflip = 0  # index that will be flipped
        for jx in range(len(syn)):
            if syn[jx][0] == ''.join(syndrome.astype('str')):
                unflip = syn[jx][1]
        if str(unflip) != 'none':
            noisy_data[ix, unflip] = 1 - noisy_data[ix, unflip]
```
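A minimal vectorized sketch of what I'm after, assuming the syndromes are available as an `(n, 3)` array of 0/1 ints (one row per codeword; the original code doesn't show where `syndrome` comes from, so that argument is my assumption). The idea is to pack each 3-bit syndrome into an integer 0..7 and use it to index a correction table, so there are no Python loops:

```python
import numpy as np

# Correction table: index = syndrome value, entry = bit to flip (-1 = no error).
# Same mapping as the syn list above ('100' -> 4, etc.).
FLIP = np.array([-1, 6, 5, 3, 4, 0, 1, 2])

def recover_data(noisy_data, syndromes):
    """Flip the erroneous bit of each 8-bit codeword in one vectorized pass.

    noisy_data : (n, 8) array of 0/1 ints
    syndromes  : (n, 3) array of 0/1 ints, one syndrome per row (assumed given)
    """
    # Pack each 3-bit syndrome into an integer 0..7 ('100' -> 4, etc.)
    idx = syndromes @ np.array([4, 2, 1])
    flip = FLIP[idx]                  # bit position to flip per row, -1 = none
    rows = np.nonzero(flip >= 0)[0]  # only rows that actually have an error
    noisy_data[rows, flip[rows]] = 1 - noisy_data[rows, flip[rows]]
    return noisy_data
```

This replaces the string comparison entirely, which is usually much faster than per-row `join` plus a linear scan of `syn`.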
Where do you compute `syndrome`, and why `join` it in every iteration instead of doing so once? Since you need to look up the first field of `syn`, would you consider using a `dict`? A `pandas` DataFrame could also give you fast joins if your two sources of data are separated.
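A minimal sketch of the `dict` lookup the comment suggests, again assuming the syndromes arrive as an `(n, 3)` array of 0/1 ints (the original code never shows where `syndrome` is computed, so that parameter is hypothetical). The key is built once per row and the inner linear scan becomes an O(1) lookup:

```python
import numpy as np

# dict form of the syn table; '000' is simply absent (no error)
SYN = {'001': 6, '010': 5, '011': 3, '100': 4, '101': 0, '110': 1, '111': 2}

def recover_data(noisy_data, syndromes):
    for ix in range(noisy_data.shape[0]):
        key = ''.join(syndromes[ix].astype(str))  # join once per row, not per table entry
        unflip = SYN.get(key)                     # O(1) lookup; None when key is '000'
        if unflip is not None:
            noisy_data[ix, unflip] = 1 - noisy_data[ix, unflip]
    return noisy_data
```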