Abstract
Resource-constrained Internet of Things (IoT) devices are executing increasingly sophisticated applications that may require computationally or memory-intensive tasks to be performed. Due to their resource constraints, IoT devices may be unable to compute these tasks themselves and will offload them to more powerful, resource-rich edge nodes. However, as edge nodes may not behave as expected, an IoT device needs to be able to select which edge node should execute its tasks. This selection problem can be addressed using a measure of behavioural trust in an edge node delivering a correct response, based on historical information about past interactions with edge nodes that is stored in memory. However, due to their constrained memory capacity, IoT devices can only store a limited amount of trust information, thereby requiring an eviction strategy when memory is full; this problem has received limited investigation in the literature. To address this, we develop the concept of the memory profile of an agent and that profile's utility. We formalise the profile eviction problem in a unified profile memory model and show it is NP-complete. To circumvent this inherent complexity, we study the performance of eviction algorithms in a partitioned profile memory model using our utility metric.
Introduction
Resource-constrained Internet of Things (IoT) devices (henceforth called IoT devices) have traditionally been used to perform sensing and actuation after receiving commands from a server. However, there is increasing interest in having these devices perform actions autonomously. With their limited resources, such devices are unable to perform expensive computation on their own and need to interact with other devices to complete some computations, for example by offloading tasks to resource-rich edge nodes (henceforth called edge nodes) [1].
For redundancy, multiple edge nodes should be provisioned in the network, which means that IoT devices need to select which edge node to interact with. However, there is a risk that interactions may not be correct or successful, e.g., due to network attacks, edge nodes being overloaded with tasks, or edge nodes intentionally misbehaving. One approach to mitigating this threat is to use a measure of behavioural trust to select which edge node to offload a task to. This measure of behavioural trust can be formed by recording opinions of past interactions with other nodes, such that trust models (such as the Beta Reputation System [2]) can use these records to quantitatively assess whether nodes will provide services correctly in the future.
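As an illustration, the Beta Reputation System derives a point estimate of trustworthiness from counts of positive and negative past interactions, as the expected value of a Beta distribution. The following minimal sketch (function and variable names are ours, for illustration only) shows how an IoT device could score candidate edge nodes from such records:

```python
def beta_reputation(r: int, s: int) -> float:
    """Expected value of the Beta(r + 1, s + 1) distribution, where r is
    the number of positive and s the number of negative past interactions,
    used as a point estimate of an edge node's trustworthiness."""
    return (r + 1) / (r + s + 2)

# A device can rank candidate edge nodes by this score and offload
# to the highest-ranked one, e.g. after 8 correct and 2 incorrect
# past interactions:
score = beta_reputation(8, 2)  # 0.75
```

With no recorded interactions the score is 0.5, reflecting complete uncertainty about the node's behaviour.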
Conclusions and future work
In this work we have investigated how to manage the information stored on resource-constrained agents that is used to assess behavioural trust in other agents. Work on cache replacement policies is not directly applicable, as deleted information used to evaluate trust cannot be recreated or re-obtained.
We showed that selecting which information to keep and/or evict is NP-complete, so simulations of 11 different eviction strategies were performed to gauge their performance. Results for our proposed trust model-agnostic utility function showed that MinNotInOther, NotInOther, Chen2016, FiveBand, FIFO, LRU, LRU2 and Random performed well. However, the MinNotInOther eviction strategy, which considers relevant items in other buffers, can further improve trust information utility when the memory size is small. Finally, we investigated how to size these memory regions under memory constraints by obtaining a solution to the Bounded Knapsack Problem.
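The region-sizing step can be sketched as a standard Bounded Knapsack computation: each memory region type has a per-unit size, a per-unit utility, and an upper bound on how many units are useful, and the goal is to maximise total utility within the device's memory budget. The following sketch (names and example values are ours, not taken from the paper) uses the usual binary-splitting dynamic program:

```python
def bounded_knapsack(capacity: int, items: list[tuple[int, int, int]]) -> int:
    """Maximise total utility subject to a memory budget.

    items: list of (size, utility, bound) triples, where at most
    `bound` units of each region type may be allocated.
    """
    best = [0] * (capacity + 1)
    for size, utility, bound in items:
        # Binary splitting: decompose the bound into chunks of
        # 1, 2, 4, ... units so each chunk is a 0/1 knapsack item.
        k = 1
        while bound > 0:
            take = min(k, bound)
            bound -= take
            w, v = size * take, utility * take
            # 0/1 update, iterating capacity downwards so each
            # chunk is used at most once.
            for c in range(capacity, w - 1, -1):
                best[c] = max(best[c], best[c - w] + v)
            k *= 2
    return best[capacity]

# Hypothetical example: two region types under a budget of 10 units.
result = bounded_knapsack(10, [(3, 5, 2), (4, 6, 1)])  # 16
```

Here the optimum allocates two units of the first region type and one of the second, exactly filling the budget.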
Future work will need to consider heuristics for deciding whether an item should be evicted, taking into account both utility and the cost of re-adding items (e.g., verifying digital signatures [30]), plus other considerations such as data freshness in highly mobile networks. Another area for future work is specialising the utility function. In this work we have focused on evaluating utility per agent; instead, the utility function could consider the global balance of information stored across agents. This may involve focusing on storing information about the best- and worst-performing agents, or trading off detail about some agents in order to store cryptographic information about more agents. Finally, distributed storage of trust information could be considered; however, this would incur additional storage costs to ensure the integrity, confidentiality and authenticity of behavioural trust information.