Functional testing of mainline vchiq driver

Michael Zoran mzoran at crowfest.net
Sat Nov 12 10:27:43 PST 2016


On Sat, 2016-11-12 at 19:06 +0100, Stefan Wahren wrote:
> Hi Michael,
> 
> > Michael Zoran <mzoran at crowfest.net> wrote on 12 November 2016 at 18:43:
> > 
> > 
> > On Sat, 2016-11-12 at 17:27 +0100, Stefan Wahren wrote:
> > > > Phil Elwell <phil at raspberrypi.org> wrote on 24 October 2016 at 18:16:
> > > > 
> > > > 
> > > > On 24/10/2016 17:09, Stefan Wahren wrote:
> > > > > Hi,
> > > > > 
> > > > > I want to submit some patches for the vchiq driver in staging,
> > > > > but I don't know how to test the specific functions of the
> > > > > mainline vchiq driver (I don't want to backport to the
> > > > > downstream kernel).
> > > > > 
> > > > > I've seen some userspace tools like vchiq_test. Which parameter
> > > > > settings are recommended?
> > > > 
> > > > I would recommend "vchiq_test -f" (functional) to check that
> > > > nothing fundamental has broken in the protocol, and "vchiq_test -p"
> > > > (ping) to perform a large number of transfers, including small
> > > > messages and bulk data.
> > > > 
> > > > Phil Elwell, Raspberry Pi
> > > > 
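
As a point of reference, here is a minimal sketch of the connect/shutdown
sequence that the functional test exercises first. It assumes the userland
vchiq library API (vchiq_initialise / vchiq_connect / vchiq_shutdown, as
declared in interface/vchiq_arm/vchiq_if.h in the userland tree); the header
path and status names are assumptions, not something stated in this thread.

/*
 * Minimal sketch of the handshake behind "vchiq_test -f", assuming the
 * userland vchiq library API.  Per the coverage table below,
 * vchiq_initialise maps to the LIB_VERSION ioctl, vchiq_connect to
 * CONNECT, and vchiq_shutdown to SHUTDOWN.
 */
#include <stdio.h>
#include "interface/vchiq_arm/vchiq_if.h"   /* assumed header path */

int main(void)
{
	VCHIQ_INSTANCE_T instance;

	/* Opens /dev/vchiq and negotiates the library version. */
	if (vchiq_initialise(&instance) != VCHIQ_SUCCESS) {
		fprintf(stderr, "vchiq_initialise failed\n");
		return 1;
	}

	/* Issues the CONNECT ioctl. */
	if (vchiq_connect(instance) != VCHIQ_SUCCESS) {
		fprintf(stderr, "vchiq_connect failed\n");
		vchiq_shutdown(instance);
		return 1;
	}
	printf("connected to VCHIQ\n");

	/* Issues the SHUTDOWN ioctl and releases the instance. */
	vchiq_shutdown(instance);
	return 0;
}
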
> > > 
> > > Since Michael has been working on the ioctls, I want to provide a
> > > small coverage table:
> > > 
> > >                                                 vchiq_test  vchiq_test  mmal_vc_diag
> > > ioctl                function                   -f          -p          stats
> > > ------------------------------------------------------------------------------------
> > > CONNECT              vchiq_connect              X           X           X
> > > SHUTDOWN             vchiq_shutdown             X                       X
> > > CREATE_SERVICE       vchiq_add_service          X           X           X
> > > REMOVE_SERVICE       vchiq_remove_service       X
> > > QUEUE_MESSAGE        vchiq_queue_message        X           X           X
> > > QUEUE_BULK_TRANSMIT  vchiq_queue_bulk_transmit  X           X
> > > QUEUE_BULK_RECEIVE   vchiq_queue_bulk_receive   X
> > > AWAIT_COMPLETION     vchiq_shutdown             X           X           X
> > > DEQUEUE_MESSAGE      vchi_msg_peek                                      X
> > > GET_CLIENT_ID        vchiq_get_client_id
> > > GET_CONFIG           vchiq_get_config           X           X           X
> > > CLOSE_SERVICE        vchiq_close_service                    X           X
> > > USE_SERVICE          vchiq_use_service                      X           X
> > > RELEASE_SERVICE      vchiq_release_service                  X           X
> > > SET_SERVICE_OPTION   vchiq_set_service_option   X           X
> > > DUMP_PHYS_MEM        vchiq_dump_phys_mem
> > > LIB_VERSION          vchiq_initialise_fd        X           X           X
> > > CLOSE_DELIVERED      vchiq_shutdown             X           X           X
> > 
> > Cool, it looks like the only two not covered are GET_CLIENT_ID and
> > DUMP_PHYS_MEM.
> 
> GET_CLIENT_ID is used by khronos. I have no idea which tool uses
> DUMP_PHYS_MEM. For security reasons I suggest making this ioctl a stub,
> or at least making its function configurable (CONFIG_VCHIQ_DUMP_PHYS_MEM).
> 

If anything, I suspect it's the vcdbg tool from Raspbian, for which I
can't find the source.

I have absolutely no problem removing DUMP_PHYS_MEM or putting it under
a different config.  The only issue I have is that this driver plus the
userland mbox driver gives userland fairly unlimited access to the GPU,
including uploading arbitrary code into the GPU.  Who knows what damage
could be done with that kind of access.

Perhaps it is best to add a CONFIG_VCHIQ_DUMP_PHYS_MEM option that is
off by default until we track down what is actually using this ioctl.
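
To illustrate the kind of gating being proposed, here is a rough sketch of
how the DUMP_PHYS_MEM handler could be compiled out by default.
CONFIG_VCHIQ_DUMP_PHYS_MEM is the symbol suggested in this thread, not an
existing Kconfig option, and the argument struct below is a simplified
stand-in rather than the driver's real definition.

/*
 * Sketch only: stub out the DUMP_PHYS_MEM ioctl unless a dedicated
 * Kconfig option is enabled.  CONFIG_VCHIQ_DUMP_PHYS_MEM is the symbol
 * proposed in this thread; the struct below is illustrative.
 */
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct vchiq_dump_mem_args {		/* simplified stand-in layout */
	void __user *virt_addr;
	size_t num_bytes;
};

static long vchiq_handle_dump_phys_mem(unsigned long arg)
{
#ifdef CONFIG_VCHIQ_DUMP_PHYS_MEM
	struct vchiq_dump_mem_args args;

	if (copy_from_user(&args, (void __user *)arg, sizeof(args)))
		return -EFAULT;

	/* ... hand off to the driver's existing dump routine here ... */
	return 0;
#else
	/* Off by default: behave as if the ioctl did not exist. */
	return -ENOTTY;
#endif
}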


> > 
> > Did you get this by looking at the source of the test tools or did
> > you add logging to the driver?
> 
> I added conditional logging because the trace for the ioctls
> generated too much output.
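
For illustration, conditional logging of this sort could be gated behind a
module parameter so the ioctl path stays quiet unless it is switched on;
the parameter name and message format below are invented for this sketch
and are not taken from Stefan's actual change.

/*
 * Illustrative only: ioctl logging gated behind a module parameter so
 * the trace can be enabled at runtime without flooding the log.
 */
#include <linux/module.h>
#include <linux/printk.h>

static bool vchiq_ioctl_debug;
module_param(vchiq_ioctl_debug, bool, 0644);
MODULE_PARM_DESC(vchiq_ioctl_debug, "Log each vchiq ioctl as it is handled");

#define vchiq_ioctl_log(fmt, ...)					\
	do {								\
		if (vchiq_ioctl_debug)					\
			pr_info("vchiq ioctl: " fmt, ##__VA_ARGS__);	\
	} while (0)
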
> 
> > Any chance this was with my ioctl patch applied?
> > 
> 
> I made this with your patch applied.
> 
> Stefan


