
[Xen-devel] iSCSI problems


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Edwards, Nigel \(Nigel Edwards\)" <nigel.edwards@xxxxxx>
  • Date: Mon, 31 Oct 2005 11:12:11 -0000
  • Delivery-date: Mon, 31 Oct 2005 11:09:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcXeC+8cOHX437zcRs6dDnai+auHJw==
  • Thread-topic: iSCSI problems

Hi,
I am having problems getting iSCSI working with Xen. I can access
iSCSI drives fine from Xen0 (e.g. I can untar a 500MB archive without
any problems being reported through dmesg or /proc/kmsg). However, if
I try to export one of those drives to a XenU instance as a SCSI
device, in order to boot the unprivileged domain from it, I get an
oops in Xen0, often resulting in the whole machine crashing. This
occurs in the very early stages of the XenU boot.
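
For reference, the disk export in my domain config looks roughly like
this (a sketch only; the device names and values here are
illustrative, the real file is attached as iSCSI-Disk1):

    kernel = "/boot/vmlinuz-2.6.12-xenU"
    memory = 256
    name   = "iscsi-domu"
    # the iSCSI LUN shows up in Xen0 as an ordinary SCSI disk (here
    # /dev/sdb1) and is exported to the unprivileged domain as its
    # root device
    disk   = [ 'phy:sdb1,sda1,w' ]
    root   = "/dev/sda1 ro"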

I am using linux-iscsi-4.0.2 for iscsi_sfnet.ko;
scsi_transport_iscsi.ko is built as part of linux-2.6.12-xen0.
I downloaded xen-unstable-src.tgz on October 25th.
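
For completeness, the initiator bring-up in Xen0 looks roughly like
this (again a sketch; the target address and device names below are
illustrative, not taken from my actual setup):

    modprobe scsi_transport_iscsi
    modprobe iscsi_sfnet
    # linux-iscsi reads its targets from /etc/iscsi.conf
    echo "DiscoveryAddress=192.168.0.10" >> /etc/iscsi.conf
    /etc/init.d/iscsi start
    # after login the LUN appears as a normal SCSI disk, e.g. /dev/sdb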

I notice that there appear to be some significant differences between
scsi_transport_iscsi.c in linux-iscsi-4.0.2 and the version in the
linux-2.6.12 kernel.

An example oops is below, and I have also attached the domain config
file. I am not sure what to do next to get this working, so any
suggestions would be appreciated. If you have iSCSI working, I would
be very grateful if you could drop me a note indicating what code and
modules you are using.

Cheers,
Nigel.


<1>Unable to handle kernel paging request at virtual address f3d55000
<1> printing eip:
<4>c0147bec
<1>*pde = ma 030cb067 pa 000cb067
<1>*pte = ma 00000000 pa 55555000
<1>Oops: 0002 [#1]
<4>PREEMPT
<4>Modules linked in: crc32c md5 iscsi_sfnet scsi_transport_iscsi sworks_agp agpgart
<4>CPU:    0
<4>EIP:    0061:[<c0147bec>]    Not tainted VLI
<4>EFLAGS: 00010292   (2.6.12.6-xen0)
<4>EIP is at buffered_rmqueue+0x19c/0x340
<4>eax: 00000000   ebx: 00000001   ecx: 00000400   edx: f3d55000
<4>esi: c167aaa0   edi: f3d55000   ebp: 00000000   esp: ecb41dd4
<4>ds: 007b   es: 007b   ss: 0069
<4>Process ifup (pid: 30665, threadinfo=ecb40000 task=f2180040)
<4>Stack: c167aaa0 00000003 00000000 eeda60c0 c0147660 c167aaa0 c050c5c0 00000000
<4>       00000000 000084d0 c0147f2f c050c5c0 00000000 00000010 00000000 00000000
<4>       00000000 00000000 00000000 f2180040 00000010 c050c92c 00000000 c011cbd3
<4>Call Trace:
<4> [<c0147660>] prep_new_page+0x50/0x60
<4> [<c0147f2f>] __alloc_pages+0xcf/0x430
<4> [<c011cbd3>] mm_init+0xa3/0xe0
<4> [<c0116c71>] pte_alloc_one+0x11/0x30
<4> [<c0153ab2>] pte_alloc_map+0x42/0x200
<4> [<c0153d9f>] copy_pte_range+0x2f/0x330
<4> [<c0154137>] copy_page_range+0x97/0xd0
<4> [<c011d0e5>] copy_mm+0x285/0x3d0
<4> [<c05aece0>] BusLogic_ProbeHostAdapter+0xe0/0x170
<4> [<c011db77>] copy_process+0x407/0xe10
<4> [<c011e685>] do_fork+0x75/0x19f
<4> [<c012dbc5>] sys_rt_sigprocmask+0x95/0x140
<4> [<c0107ae1>] sys_fork+0x31/0x40
<4> [<c0109365>] syscall_call+0x7/0xb
<4>Code: 8b 74 24 14 31 ed 89 f6 8d bc 27 00 00 00 00 89 34 24 bf 03 00 00 00 89 7c 24 04 e8 6f 1d fd ff 89 c2 89 c7 b9 00 04 00 00 89 e8 <f3> ab 89 14 24 b9 03 00 00 00 83 c6 20 89 4c 24 04 e8 ae 1d fd
<4> <6>note: ifup[30665] exited with preempt_count 2
<3>scheduling while atomic: ifup/0x00000002/30665
<4> [<c046fa61>] schedule+0x681/0x760
<4> [<c011fcc1>] release_console_sem+0x71/0x190
<4> [<c011fa4f>] vprintk+0x1df/0x330
<4> [<c0470d66>] rwsem_down_read_failed+0xc6/0x1e0
<4> [<c0123860>] .text.lock.exit+0x27/0x87
<4> [<c0122087>] do_exit+0xa7/0x410
<4> [<c0109d65>] die+0x1c5/0x1d0
<4> [<c0117fe4>] do_page_fault+0x3e4/0x65b
<4> [<c011b40a>] __wake_up+0x4a/0xb0
<4> [<c01e8461>] journal_stop+0x171/0x2f0
<4> [<c01da350>] ext3_mark_inode_dirty+0x50/0x60
<4> [<c01ded34>] __ext3_journal_stop+0x24/0x50
<4> [<c014ae56>] __do_page_cache_readahead+0xa6/0x270
<4> [<c01da3d1>] ext3_dirty_inode+0x71/0x90
<4> [<c01da360>] ext3_dirty_inode+0x0/0x90
<4> [<c018eb36>] __mark_inode_dirty+0x116/0x1e0
<4> [<c0124e21>] current_fs_time+0x51/0x70
<4> [<c01096ee>] page_fault+0x2e/0x34
<4> [<c0147bec>] buffered_rmqueue+0x19c/0x340
<4> [<c0147660>] prep_new_page+0x50/0x60
<4> [<c0147f2f>] __alloc_pages+0xcf/0x430
<4> [<c011cbd3>] mm_init+0xa3/0xe0
<4> [<c0116c71>] pte_alloc_one+0x11/0x30
<4> [<c0153ab2>] pte_alloc_map+0x42/0x200
<4> [<c0153d9f>] copy_pte_range+0x2f/0x330
<4> [<c0154137>] copy_page_range+0x97/0xd0
<4> [<c011d0e5>] copy_mm+0x285/0x3d0
<4> [<c05aece0>] BusLogic_ProbeHostAdapter+0xe0/0x170
<4> [<c011db77>] copy_process+0x407/0xe10
<4> [<c011e685>] do_fork+0x75/0x19f
<4> [<c012dbc5>] sys_rt_sigprocmask+0x95/0x140
<4> [<c0107ae1>] sys_fork+0x31/0x40
<4> [<c0109365>] syscall_call+0x7/0xb

Attachment: iSCSI-Disk1
Description: iSCSI-Disk1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
