Redis performance on AWS EC2 Micro Instance
I have made a funny observation on a Redis instance deployed on an AWS EC2 micro instance (a test environment). I am measuring the execution times of various operations that hit Redis. To summarise, the average execution times are shown below:
- Jedis -> Redis connection: 63 ms
- read of the top element of a list using lrange(<listname>, 0, 1): 44 ms
- read of all elements of a set: 5 ms
- iteration over the entire set space: 60 ms (set space approx. 130 elements)
- iteration over a subset of the set's elements: 5 ms (subset size 5)
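A simplified sketch of how such timings can be captured (a minimal sketch, assuming System.nanoTime wrappers around each call; note that Jedis connects lazily, so the TCP connect is forced explicitly, and the key name is a hypothetical stand-in):

import redis.clients.jedis.Jedis;

public class RedisTimings {
    public static void main(String[] args) {
        Jedis redis = new Jedis("localhost");

        // Jedis connects lazily on the first command, so force the
        // TCP connect here to time it in isolation.
        long t0 = System.nanoTime();
        redis.connect();
        long t1 = System.nanoTime();
        System.out.printf("connect: %.1f ms%n", (t1 - t0) / 1e6);

        // Time the head-of-list read ("holderDate" is a hypothetical key name).
        t0 = System.nanoTime();
        String head = redis.lrange("holderDate", 0, 1).get(0);
        t1 = System.nanoTime();
        System.out.printf("lrange head: %.1f ms (value=%s)%n", (t1 - t0) / 1e6, head);

        redis.disconnect();
    }
}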
What is worrying me are the first 2 operations (the connection, and the extraction of the top element of the list).

For the connection, the code is shown below:
Jedis redis = new Jedis("localhost");
And the extraction of the top element of the list:
String currentDate = redis.lrange(holderDate, 0, 1).get(0);
Now, from the Redis LRANGE command documentation:

Time complexity: O(S+N) where S is the start offset and N is the number of elements in the specified range.

In my code, S is 0 and N is 2 (indices 0 and 1 inclusive), so both are trivially small.
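(As an aside, fetching only the head element can also be expressed without the two-element slice; a minor sketch reusing the redis handle and holderDate key variable from above:)

// LRANGE key 0 0 slices out just the head element;
// LINDEX 0 is the idiomatic single-element form.
String head = redis.lrange(holderDate, 0, 0).get(0);
String sameHead = redis.lindex(holderDate, 0);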
My question is: what is causing these execution times for such trivial operations?

Are there characteristics of the EC2 micro instance that adversely affect the performance of these operations?

Some key information on the Redis deployment:
redis_version:2.4.10
used_memory:2869280
used_memory_human:2.74M
used_memory_rss:4231168
used_memory_peak:2869480
used_memory_peak_human:2.74M
mem_fragmentation_ratio:1.47
Thanks in advance.
"Are there characteristics of the EC2 micro instance that adversely affect the performance of these operations?"
The Amazon EC2 instance type t1.micro is unique in that it is heavily throttled by definition; see Micro Instances:

Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for lower throughput applications and websites that require additional compute cycles periodically. [emphasis mine]
The latter is correct in principle, but the amount of throttling catches many users by surprise - while the exact algorithm isn't specified, the documentation explains, and especially illustrates, the general strategy and effect pretty well; in practice it seems to yield around ~97% so-called steal time once throttling kicks in. See the section When the Instance Uses Its Allotted Resources specifically:

We expect your application to consume only a certain amount of CPU resources in a period of time. If the application consumes more than your instance's allotted CPU resources, we temporarily limit the instance so it operates at a low CPU level. If your instance continues to use all of its allotted resources, its performance will degrade. We will increase the time we limit its CPU level, thus increasing the time before the instance is allowed to burst again. [emphasis mine]
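That steal time can be observed directly while a test runs, e.g. via the st column of top or vmstat; below is a minimal Java sketch reading the aggregate steal counter from /proc/stat (assumptions on my part: a Linux guest, run on the instance itself, and the standard /proc/stat field layout):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StealTime {
    public static void main(String[] args) throws IOException, InterruptedException {
        long[] before = sample();
        Thread.sleep(5000);                          // 5 s observation window
        long[] after = sample();
        double pct = 100.0 * (after[1] - before[1]) / (after[0] - before[0]);
        System.out.printf("steal time over window: %.1f%%%n", pct);
    }

    // Returns {totalJiffies, stealJiffies} from the aggregate "cpu" line.
    // Field order after "cpu": user nice system idle iowait irq softirq steal ...
    static long[] sample() throws IOException {
        String[] f = Files.readAllLines(Paths.get("/proc/stat")).get(0).trim().split("\\s+");
        long total = 0;
        for (int i = 1; i < f.length; i++) total += Long.parseLong(f[i]);
        long steal = f.length > 8 ? Long.parseLong(f[8]) : 0;
        return new long[] { total, steal };
    }
}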
This renders performance tests moot indeed, as Didier Spezia rightly commented already. Please note that while other EC2 instance types may exhibit steal time as well (which is a general artifact of virtualization platforms, where physical CPUs might be shared between various virtual machines), the respective patterns are far more regular; in that case, performance tests are possible in principle, though the following constraints apply in general:
- You need to run the tests on a number of instances, to at least account for the varying amount of steal time due to random CPU loads on neighboring virtual machines.
- You shouldn't run the benchmarking application on the same virtual machine as the one being benchmarked, since that in general impacts the results (see the sketch below).
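As a concrete illustration of the second point, a latency-client sketch meant to run from a separate instance, with a warmup phase and enough samples to smooth over throttling windows (the host name, key, and iteration counts are assumptions for illustration, not part of the answer):

import java.util.Arrays;
import redis.clients.jedis.Jedis;

public class RemoteLatency {
    public static void main(String[] args) {
        // Run from a different instance than the one hosting Redis.
        Jedis redis = new Jedis("redis-host.example.internal");  // hypothetical host

        for (int i = 0; i < 1000; i++) redis.ping();             // warmup

        int n = 10000;
        long[] samples = new long[n];
        for (int i = 0; i < n; i++) {
            long t0 = System.nanoTime();
            redis.lrange("holderDate", 0, 1);                    // hypothetical key
            samples[i] = System.nanoTime() - t0;
        }
        Arrays.sort(samples);
        System.out.printf("median %.2f ms, p99 %.2f ms%n",
                samples[n / 2] / 1e6, samples[(int) (n * 0.99)] / 1e6);
        redis.disconnect();
    }
}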