Re: [Xen-devel] [PATCH v2 10/13] optee: add support for RPC commands
On 10/09/18 19:14, Volodymyr Babchuk wrote:
> On 10.09.18 18:34, Julien Grall wrote:
>> On 03/09/18 17:54, Volodymyr Babchuk wrote:
>>> static struct shm_buf *allocate_shm_buf(struct domain_ctx *ctx,
>>>                                         uint64_t cookie, int pages_cnt)
>>> @@ -704,6 +732,28 @@ static bool copy_std_request_back(struct domain_ctx *ctx,

I think you need to modify your habit, because I am pretty sure Linux folks
would not allow:

    struct shm_rpc *rpc = find_shm(...);
    if ( rpc )
    ...

The correct way is:

    struct shm_rpc *rpc = find_shm(...);

    if ( rpc )

Notice the newline between struct and if.

> No, this actually is part of the protocol. OP-TEE can ask to allocate more
> shared buffers, one per RPC return.

Please give a link to the spec and the paragraph.

> call->non_contig[x] is needed to hold the list of pages until OP-TEE
> consumes them. Then it can be freed and reused to allocate the next buffer.
> Consider this:

What is x?

> 1. OP-TEE issues RPC "allocate buffer".
> 2. NW returns a list of pages.
> 3. The mediator translates them and stores the address in non_contig[x].
> 4. OP-TEE begins to consume this list.
> 5. An IRQ arrives and OP-TEE is forced to break off the work.
> 6. The mediator receives control back, but it should not free non_contig[x],
>    because it is not sure whether OP-TEE has finished reading from it.
> 7. Xen/the guest handles the IRQ and returns control back to OP-TEE.
> 8. OP-TEE finishes processing this buffer and asks for another one.
> 9. NW returns a list of pages for the next buffer.
> 10. At this point the mediator is sure that OP-TEE has finished processing
>     the old non_contig[x], so it can free it and allocate another.

Thank you for the description of the protocol. However, it still does not
explain why you decided to free entry MAX_NONCONTIG_ENTRIES - 1. Why not 0,
1 or n?

Overall, it feels to me like you want to write more documentation about how
the mediator is supposed to work.

[...]

>> Please try to avoid changing the coding style in a different patch. But
>> this one is wrong.
> Yep :( This is an artifact from splitting the big patch.

>>>         gprintk(XENLOG_WARNING, "Failed to allocate shm_rpc object\n");
>>> -       ptr = 0;
>>> -   }
>>> -   else
>>> -       ptr = mfn_to_maddr(shm_rpc->guest_mfn);
>>> +   ptr = ~0;
>> Can you explain why you change from 0 to ~0?
> I had to introduce this in the original patch, actually.

What do you mean?

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
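
To make the buffer lifecycle discussed above concrete, here is a minimal
sketch in C of how a mediator could manage the per-call non_contig slots.
It is not the code from the patch: the types and helpers shown here
(page_list, translate_pages(), handle_rpc_alloc_buf()) are illustrative
stand-ins, and the value given to MAX_NONCONTIG_ENTRIES is arbitrary. The
point it illustrates is step 10 of the protocol description: the list
previously stored in a slot is freed only when OP-TEE asks for another
buffer, which is the first moment the mediator knows the old list is no
longer being read.

    /* Illustrative sketch only -- simplified stand-ins, not the patch code. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define MAX_NONCONTIG_ENTRIES 5   /* arbitrary value for this sketch */

    /* A translated list of guest pages handed to OP-TEE for one RPC buffer. */
    struct page_list {
        uint64_t *addrs;     /* addresses as seen by OP-TEE */
        size_t nr_pages;
    };

    /* Per standard-call state: one slot per buffer OP-TEE may still read. */
    struct std_call_ctx {
        struct page_list *non_contig[MAX_NONCONTIG_ENTRIES];
    };

    /* Translate a guest-provided page list (step 3); details omitted. */
    static struct page_list *translate_pages(uint64_t guest_list, size_t nr)
    {
        struct page_list *pl = calloc(1, sizeof(*pl));

        if ( !pl )
            return NULL;

        pl->addrs = calloc(nr, sizeof(*pl->addrs));
        if ( !pl->addrs )
        {
            free(pl);
            return NULL;
        }
        pl->nr_pages = nr;
        /* ... walk the guest list at guest_list and fill pl->addrs ... */
        (void)guest_list;

        return pl;
    }

    static void free_page_list(struct page_list *pl)
    {
        if ( !pl )
            return;
        free(pl->addrs);
        free(pl);
    }

    /*
     * Called when OP-TEE has issued RPC "allocate buffer" and the guest has
     * returned a fresh list of pages (steps 8-9).
     */
    static int handle_rpc_alloc_buf(struct std_call_ctx *call, unsigned int slot,
                                    uint64_t guest_list, size_t nr)
    {
        struct page_list *pl;

        if ( slot >= MAX_NONCONTIG_ENTRIES )
            return -1;

        pl = translate_pages(guest_list, nr);
        if ( !pl )
            return -1;

        /*
         * Step 10: OP-TEE asking for the next buffer means it has finished
         * consuming whatever was stored in this slot before, so the old
         * list can finally be dropped and the slot reused.
         */
        free_page_list(call->non_contig[slot]);
        call->non_contig[slot] = pl;

        return 0;
    }

    int main(void)
    {
        struct std_call_ctx call = { { NULL } };

        /* First RPC "allocate buffer": nothing to free yet in slot 0. */
        handle_rpc_alloc_buf(&call, 0, 0x40000000, 4);
        /* Second request for the same slot: the first list is released here. */
        handle_rpc_alloc_buf(&call, 0, 0x40100000, 4);

        free_page_list(call.non_contig[0]);
        return 0;
    }

Deferring the free to that point follows from step 6: after an interrupted
RPC the mediator cannot tell whether OP-TEE has finished reading the list,
so releasing it any earlier could leave OP-TEE with a dangling buffer.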
 
 