Windows dev to Linux UAT deployment issue - can't log on.

Hi All,
I've recently deployed my dev dashboard/webcat to our UAT server (for the first time) and changed instanceconfig.xml to point to the new webcat directory. When I log in (as Administrator), I get the following error on the web page:
Unable to Log In
Invalid Handle Of 'PN3saw13security_impl5GroupE' Exception
Error Details
Error Codes: Q4NU7XSN
The sawlog0.log file contains the following:
Type: Error
Severity: 30
Time: Wed Mar 17 14:04:41 2010
File: project/webbedrock/handle.cpp Line: 21
Properties: ThreadID-4066503568
Location:
saw.security.securityimpl.getPermissions
saw.subsystem.security.checkAuthenticationImpl
saw.threadPool
saw.threads
Invalid Handle Of 'PN3saw13security_impl5GroupE' Exception
Can anyone advise on what the problem is? I wondered whether it was a Linux permissions issue, so I tried a chmod -R 777 on the webcat directory, but that made no difference.
Thanks in advance

For anyone interested, the problem, now resolved, was in the transfer of the webcat directory.
In the end I zipped the webcat directory, transferred the ZIP, then unzipped it on the destination server, rather than copying the webcat directory across as a series of individual sub-files/folders.
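The transfer approach can be sketched like this (tar is shown in place of zip, the paths are made-up stand-ins, and the scp step is simulated by a local copy):

```shell
# Made-up paths standing in for the real dev and UAT locations.
SRC=/tmp/demo_webcat
DEST=/tmp/uat_webcat

# Demo setup: a tiny fake webcat tree
mkdir -p "$SRC/shared/reports"
echo "report-definition" > "$SRC/shared/reports/sales.xml"

# 1. Archive the whole catalog so it travels as a single file
tar czf /tmp/webcat.tgz -C /tmp demo_webcat

# 2. Transfer the single archive (scp in real life; a local copy stands in here)
# scp /tmp/webcat.tgz oracle@uat-server:/tmp/
cp /tmp/webcat.tgz /tmp/webcat_transferred.tgz

# 3. Unpack on the destination side
mkdir -p "$DEST"
tar xzf /tmp/webcat_transferred.tgz -C "$DEST"

ls "$DEST/demo_webcat/shared/reports"
```

Archiving first keeps file names, attributes and the directory layout intact in one unit, which a file-by-file copy between Windows and Linux can silently mangle.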

Similar Messages

  • OD issues - can't log in once bound

    I am experiencing some issues with OD in Leopard Server and Client. Specifically:
    - When asked for authentication during binding, the Directory Utility on the client says "attempting to bind" and then stops, leaving no error message but also not binding the client. When disabling authentication for binding, the computer binds.
    - User accounts are configured to require a user to change their password upon logging in the first time. The client clearly connects to the server, authenticates a password against said server, and then prompts me to change the password. However, I cannot set a new password.
    - When creating user accounts after disabling the change-password-on-first-login feature, I cannot get the user accounts to log in to the workstation. The log files on the server indicate that authentication is occurring against the OD database, and if I enter the wrong password it generates that error on the server. But the local client will still have a shaking login box.
    Services configured on the server are AFP, DHCP, DNS and Open Directory.
    This is the first time I've ever set up a domain controller so I'm a bit stuck.

    Based on these symptoms:
    +...the regular user takes around 1 minute for the login to fail+
    +...the local client will still have a shaking login box+
    I don't think that you have a home directory problem. Even if you didn't specify a home attribute for the user account, your error or situation would be different: Authentication and loginwindow authorization would succeed, but you'd see an error shortly after that. The error would usually say something like "the home for so-in-so is on an AFP or SMB server and cannot be found." I've also seen the login continue to the Finder (Tiger systems only) where / may be listed as the user's home (without read/write permission for the user, typically).
    Instead, I think you have an authentication problem; the shaking login window indicates that authorization failed entirely, so the problem has to be authentication. Specifically, your situation is probably related to a problem with the SASL (Password Server) database or Kerberos configuration of the server.
    Check Server Admin to verify that all Open Directory processes are running properly: You need LDAP, SASL (Password Server), and (optionally) Kerberos. If you don't want Kerberos, be sure to explicitly destroy the KDC and cleanup properly (directions below).
    As a last resort, you may want to revert your server to Standalone and re-promote it to OD Master. Unfortunately, if you've changed your server's hostname or primary IP address, this may be necessary. In my experience, the changeip command does a good job of updating mount records and home directory attributes for user accounts, but it does not always correctly update Kerberos information associated with the LDAP domain. In other words, the OD Master may still think that the KDC for your realm is at the previous IP address or DNS hostname.
    *Procedure: Take Down Kerberos, but Keep Open Directory Master*
    Use this whenever you're joining your Open Directory Master system to another directory domain (such as Active Directory), and you want the server to be a member of the other directory server's (domain controller's) Kerberos realm. You can also use this procedure if you simply don't want Kerberos.
    1. Use sso_util to "de-Kerberize" services and to destroy the Kerberos realm, like this:
    *sudo sso_util remove -k -a <directory admin> -p <diradmin's password> -r <KERBEROS.REALM>*
    2. Destroy any items that were left over from the KDC's configuration - do this manually:
    a. su to root, then navigate to /var/db/krb5kdc and list the contents of the folder with *ls -al*. Delete any file whose name does NOT contain LKDC.
    b. Use Workgroup Manager to destroy any Kerberos entries in /Local/Default/Config in the server's local directory domain.
    c. Destroy the files created by kerberosautoconfig: sudo rm /Library/Preferences/edu.mit.Kerberos*
    For more detail, please see my response to another question: http://discussions.apple.com/thread.jspa?messageID=6294175&#6294175
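    Step 2a above (keep only the LKDC files) can be sketched with find. The commands below run against a throwaway stand-in directory with invented file names, not the real /var/db/krb5kdc — back that directory up before deleting anything in it:

```shell
# Stand-in for /var/db/krb5kdc; the file names are invented examples.
KDC_DIR=/tmp/demo_krb5kdc
mkdir -p "$KDC_DIR"
touch "$KDC_DIR/principal" "$KDC_DIR/kdc.conf" \
      "$KDC_DIR/m-key.LKDC-SHA1.ABC123" "$KDC_DIR/kdc.conf.LKDC-SHA1.ABC123"

# Delete every regular file whose name does NOT contain LKDC
find "$KDC_DIR" -maxdepth 1 -type f ! -name '*LKDC*' -delete

ls "$KDC_DIR"   # only the LKDC-named files remain
```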
    *Procedure: Demote Server to Standalone and then re-promote to Open Directory Master*
    Use this when all else fails. All users, groups, computers and computer groups in the shared LDAP domain will be destroyed; all user password information in Password Server and Kerberos will also be lost. You can back up your users, groups, and computers via Workgroup Manager's Export command; this will not, however, preserve the users' passwords.
    1. Use Server Admin to change the role of the Server from Open Directory Master to Standalone.
    2. Follow the Kerberos tear-down instructions (above). For some reason, Server Admin doesn't fully destroy the Open Directory Kerberos realm, so this is necessary.
    3. Verify that the server's hostname and primary IP address are set. Double-check your DNS settings for an A record and PTR (reverse lookup) record for your server.
    4. Use Server Admin to re-promote the Server to Open Directory Master.
    --Gerrit

  • 10.8.4 log-in issues (can't log-in)

    I just updated my MBpro to 10.8.4.   
    I'm lucky I have another computer I can use to write this from. 
    Right now I CANNOT LOG IN.  I'm presented with the log-in screen and when I select my user (I am the only user) and enter my password - screen goes white - gives me a spinning gear - then black and back to the log-in screen. 
    I know it accepts the password because it doesn't do the little shake you get when you enter a password incorrectly. 
    I also cannot get in from the guest side either. 
    I feel like I can spend the next hour trying to log in and get nowhere... 
    thanks!

    #6 Reset User Permissions - step-by-step guide to fix your Mac

  • How to create Dev, Test and UAT environment of OAS 10g on single Linux box

    Hi
    According to Paul's forms/reports installation thread, I installed the standalone Forms & Reports (10.1.2.0.2) services on SUSE Linux 9, and they are working fine.
    Now my next requirement is to create three environments on my Linux box: dev, test and UAT. The one I created before, I am using as dev.
    Kindly provide some direction on how I can create the test and UAT environments on the same machine, pointing to different source files and databases.
    1. Do I need to install the standalone forms/reports services again, twice? If yes, how can I access them?
    2. Is there any setting in the existing OAS configuration which can divert me to different sources and databases? I saw something like this somewhere:
    http://oas.com:7777/forms90/f90servlet?config=UAT&userid=cg_am2/training@tardist
    bla bla bla.
    Please help.
    JM

    Hi
    Yes. If your server has the resources (CPU and memory), the best thing to do would be to install Dev, Test and UAT in three different ORACLE_HOMEs, with different port numbers for the Oracle HTTP Server to listen on. Create different environment files to source these installations. You could even install three separate standalone Web Caches, each in its own ORACLE_HOME, in front of these environments.
    There is, however, a non-technical argument for installing the UAT environment on a separate box, or at least for doing the UAT testing when the Dev and Test processes are not running; otherwise they will blur the results of the UAT tests. Keep in mind, too, that for availability and ease of management it would be better to install your environments on separate boxes.
    The config=UAT in the URL points to a forms service for an application called UAT, I guess. Unless you have only one application in all the environments, you could create the forms applications in one ORACLE_HOME, but you would end up with just one environment instead of three. Going for the option where you install the environments on different boxes will save you a lot of headaches.
    cheers
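    The "different environment files to source these installations" point can be sketched like this; every path and port below is a made-up example:

```shell
# One environment file per ORACLE_HOME (uat.env shown; dev.env and
# test.env would differ only in path and port). All values are invented.
mkdir -p /tmp/demo_env

cat > /tmp/demo_env/uat.env <<'EOF'
export ORACLE_HOME=/u01/app/oracle/product/10.1.2/uat
export OHS_PORT=7779
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=$ORACLE_HOME/bin:$PATH
EOF

# Source the file for whichever environment you want to work in:
. /tmp/demo_env/uat.env
echo "$ORACLE_HOME (HTTP Server on port $OHS_PORT)"
```

Sourcing the right file before starting opmn or running any tools keeps the three installs from stepping on each other's binaries and listener ports.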

  • Windows App deployment issue

    Hi, 
    I am getting the below error while publishing the Windows app (.xap) for deployment.
    Please suggest: can I deploy .xap apps to Windows Phone 8 and Windows Phone 8.1 from Intune?
    Shailendra Dev

    It is from the Windows Store. This is what you need to do
    Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • Install PT8.53 with Linux Issue: Windows NetMgr and Linux NetMgr

    Folks,
    Hello. I am installing PeopleTools 8.53 Internet Architecture. The database server is Oracle Database 11gR1, and the OS is Oracle Linux 5. I have installed JDK 7, WebLogic 10.3.6, Tuxedo 11gR1 and PeopleTools 8.53 successfully on Oracle Linux 5.
    I have been setting up the PeopleTools 8.53 database. Because the Install Wizard has a problem, I set up the PeopleTools 8.53 database manually using the Oracle starter database instance PT853. I have run the following scripts:
    1) utlspace.sql
    2) dbowner.sql
    3) ptddl.sql
    4) psadmin.sql
    5) psroles.sql
    6) connect.sql
    Then, we need to run a Data Mover script on a Windows client machine to populate the PeopleTools database instance PT853 on the Linux server machine. I have installed the Oracle Database 11gR2 client for 32-bit Windows in my 64-bit Windows XP virtual machine. Now I am confronting a connection issue between the 2 VMs, as below:
    In Linux Server Machine Net Manager:
    Service Name: PT853
    Connection Type: Database Default
    Protocol: TCP/IP
    Host Name: localhost.localdomain
    Port Number: 1521
    Listener: LISTENER
    Protocol: TCP/IP
    Host: localhost.localdomain
    Port: 1521
    I test the Service PT853 using UserID "SYSADM" and Password "SYSADM". The connection is successful.
    In Windows Client machine, the information in Net Manager is the same, but the connection is not successful. Its details are as below:
    Net Service Name: PT853
    Protocol: TCP/IP
    Host name of Database Machine: localhost.localdomain
    Port Number: 1521
    Database Service Name: PT853
    Connection Type: Database Default
    I test the Service using UserID "SYSADM" and Password "SYSADM" that are the same with Linux, but get this error: TNS: listener does not currently know of service requested in connect descriptor.
    My questions are:
    Do we need to do something to connect the Windows XP VM with the Linux VM first? If yes, how do we do it? If not, how do we solve the above issue?
    Thanks.

    Folks,
    Hello. Thanks a lot for replying.
    Regarding PeopleSoft networking, I have done Configuration Manager to enable Data Mover and Application Designer to login into Database Instance PT853.
    Regarding 2 Virtual Machines (Windows XP and Oracle Linux 5) connects with each other, I have done the following:
    First, I follow this tutorial http://www.vmware.com/support/ws5/doc/ws_devices_serial_2vms.html to configure 2 VMs for Windows 7 Host.
    Second, in Windows XP Oracle Database Client Install Directory, the file "tnsnames.ora" has one entry that is the Service Name in Net Manager. In Linux Oracle Database Server install directory, the file "tnsnames.ora" has no entries because I installed Oracle Database Server with the starter Database instance PT853.
    I have tried to test 2 VMs in the way as below:
    In Linux, [user@localhost ~]$ping WindowsHostName
    Its output: unknown host WindowsHostName
    In Windows XP Command Prompt:
    C:\ping localhost.localdomain
    Its output: pinging localhost.localdomain 127.0.0.1 with 32 bytes of data...
    Reply from 127.0.0.1: bytes=32 time=2ms, TTL=128
    It replies a few times and then stops by itself. It seems that Windows XP is pinging itself and not the Linux server. The hostname of the Linux server is "localhost.localdomain" as well, which is kind of confusing to me.
    From the above information, we can see the 2 VMs cannot connect with each other. Net Manager in the Windows XP Oracle client cannot connect to its service PT853.
    I don't understand how to connect the Oracle client on Windows with the Oracle Database server on Linux. Can any folk help to solve this issue?
    Very grateful in advance.
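    The symptom (ping of localhost.localdomain answering from 127.0.0.1) suggests the Windows client is resolving the name to itself, so its tnsnames.ora entry must use the Linux VM's real address instead. A sketch of what the Windows-side entry likely needs to look like — 192.168.56.101 is an invented address, so substitute the Linux VM's actual IP, and make sure the Linux listener is not bound only to localhost:

```shell
# Write a demo tnsnames.ora entry; 192.168.56.101 is a made-up address.
cat > /tmp/demo_tnsnames.ora <<'EOF'
PT853 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.101)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = PT853))
  )
EOF

grep 'HOST' /tmp/demo_tnsnames.ora   # must NOT say localhost.localdomain
```

With HOST set to 127.0.0.1 or a name that resolves to it, the client connects to its own machine, which is consistent with the "listener does not currently know of service requested" error.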

  • Windows master unable to connect to jvms on Linux systems; Config issue?

    I have a Win2K12 system set up as an admin server to start a workload on slave systems which are dual-booting Win2K12 and RHEL7. This works fine for the Win2K12 setup, but as I move on to the NFS testing, I run into problems connecting to the Linux hosts.
    This SMB workload file, run from the Win2K12 admin server pointing at Win2K12 workers works fine:
    compratio=2.0,dedupratio=5,dedupunit=4k
    hd=r710-01,system=r710-01.storage.spoc,user=administrator,shell=vdbench
    hd=r710-08,system=r710-08.storage.spoc,user=administrator,shell=vdbench
    sd=sd1,host=r710-01,lun=Z:\test1.txt,threads=32,size=50g
    sd=sd8,host=r710-08,lun=Z:\test8.txt,threads=32,size=50g
    wd=wd1,sd=(sd1,sd2,sd3,sd4,sd5,sd6,sd7,sd8),xfersize=4k,rdpct=25,seekpct=100,openflags=directio
    rd=run1,wd=wd1,iorate=(250,500,750,1000,max),elapsed=600,interval=5
    This test NFS workload file does not:
    compratio=2.0,dedupratio=5,dedupunit=4k
    hd=r710-08,system=r710-08.storage.spoc,vdbench=/mnt/nfs/vdbench/vdbench,user=root,shell=vdbench
    sd=sd1,host=r710-08,lun=/mnt/nfs/test8.txt,threads=32,size=50g
    wd=wd1,sd=(sd1),xfersize=4k,rdpct=100,seekpct=100,openflags=directio
    rd=run1,wd=wd1,iorate=(250,500,750,1000,max),elapsed=600,interval=5
    IPTables is stopped, and the vdbench rsh daemon is running on R710-08. I've verified that I can start vdbench locally on R710-08 and run it against the NFS mount. I've tried it with the vdbench path set properly for the Windows admin host (C:\vdbench\) and properly for R710-08 (above), as well as the user set to "Administrator" and "root", but in no case can I get it to successfully connect. In all cases, I get this or similar:
    C:\vdbench>vdbench -f linuxtest.txt
    Vdbench distribution: vdbench50402
    For documentation, see 'vdbench.pdf'.
    16:39:59.019 input argument scanned: '-flinuxtest.txt'
    16:39:59.097 *
    16:39:59.097 * In order for Dedup to remember which data patterns have been written and
    16:39:59.097 * which data patterns they can be replaced with, Dedup has activated a
    16:39:59.097 * subset of Data Validation. Data Validation will only be used to keep
    16:39:59.097 * track of data patterns. It will not be used to validate data, unless
    16:39:59.097 * of course specifically requested.
    16:39:59.097 *
    16:39:59.144 Starting slave: C:\vdbench\vdbench SlaveJvm -m 10.241.6.23 -n r710-08.storage.spoc-10-150309-16.39.58.972 -l r710-08-0 -p 5570
    16:40:09.222 Waiting for slave connection: r710-08-0
    16:40:19.395 Waiting for slave connection: r710-08-0
    16:40:20.161
    16:40:20.161 Trying to connect to the Vdbench rsh daemon on host r710-08.storage.spoc
    16:40:20.161 The Vdbench rsh daemon must be started on each target host.
    16:40:20.161 This requires a one-time start of './vdbench rsh' on the target host.
    16:40:20.161 Trying this for 60 seconds only.
    16:40:29.568 Waiting for slave connection: r710-08-0
    16:40:39.740 Waiting for slave connection: r710-08-0
    16:40:46.194
    16:40:46.194 Trying to connect to the Vdbench rsh daemon on host r710-08.storage.spoc
    16:40:46.194 The Vdbench rsh daemon must be started on each target host.
    16:40:46.194 This requires a one-time start of './vdbench rsh' on the target host.
    16:40:46.194 Trying this for 60 seconds only.
    16:40:49.913 Waiting for slave connection: r710-08-0
    16:40:59.210
    16:40:59.210 Terminating attempt to connect to slaves.
    16:40:59.210
    java.lang.RuntimeException: Terminating attempt to connect to slaves.
            at Vdb.common.failure(common.java:306)
            at Vdb.ConnectSlaves.connectToSlaves(ConnectSlaves.java:99)
            at Vdb.Vdbmain.masterRun(Vdbmain.java:730)
            at Vdb.Vdbmain.main(Vdbmain.java:577)
    Bottom line - Is the Windows/Linux mixed environment thing going to work? If so, what am I missing? If not, I need to give up and move on with creation of a second (RHEL7) admin server.
    Re-formatted for clarity. - Mike Baxter

    Thanks for the response, Henk. The Linux->Linux testing produced some more helpful output, which allowed me to track the problem down to the firewall. I'd verified that iptables was not running, but I didn't realize that RHEL7 had moved to firewalld, which was running.
    It now works linux->linux, windows->linux, and linux->windows, with appropriate workload files.
    This is a semi-static environment intended to be handed to not-always-very-technical users for proof-of-concept duties, so most of my control necessarily is centralized on a Win2K12 VM. The systems generating the load will be between 1 and 8 physical systems and up to two dozen VMs running Win2K12 or RHEL7. While the workloads will usually be all Windows or all Linux, the "semi-static" nature means I've got to be able to run both concurrently.
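    For anyone hitting the same wall: before digging into vdbench itself, it's worth checking from any Linux box on the network whether the slave port (5570 in the log above) is reachable at all. A quick probe using bash's built-in /dev/tcp, with the host name taken from the log as an example:

```shell
# Probe a TCP port using bash's /dev/tcp redirection; no extra tools needed.
# The host name mirrors the log output above -- adjust for your own setup.
port_open() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

if port_open r710-08.storage.spoc 5570; then
  echo "slave port reachable"
else
  echo "blocked or down -- check firewalld on the target"
fi
```

If the port turns out to be blocked, opening it on the RHEL7 worker (e.g. with firewall-cmd, or by stopping firewalld as described above) should let the master connect.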

  • Windows x64 Deployment Issue works in /debug

    I'm seeing a strange issue where my natively packaged JavaFX exe fails when I try to run it on a Windows 7 machine. It was failing silently, so I started it with "/debug" appended to show what was happening. To my great surprise, the program works fine when run this way!
    When I start it up on my Windows 8 machine, it crashes, complaining about a buffer overflow:
    Problem Event Name:     BEX64
    Has anyone run into this issue?

    To be clear, my App does not take command line arguments. The /debug argument is a Windows thing that forces the exe to spit output to the console. I'm still at a loss for why this would make a difference in whether the app would run or not.
    Can someone point me into some Windows tools to point to the issue for the BufferOverflow? I've been using ProcessMonitor, but without any success.

  • I am having some issues getting OS X to mount the hard drive from my old laptop (Windows 7). Is there anything I can do to mount this?

    I am having some issues getting OS X to mount the hard drive from my old laptop (Windows 7). Is there anything I can do to mount this?
    It is not showing on the Desktop or in the left column of Finder (set to do both in Finder Preferences), and is not showing up in the disk application under Utilities (sorry, I can't remember the name off the top of my head!)

    cjz0r wrote:
     ...I'm starting to think it is most likely NTFS, am I looking at needing an Application to read the NTFS partitions on the drive?
    You should be able to read NTFS formatted drives, just not write to them. For that matter, USB Flash drives generally come formatted NTFS for use with PC's and they have to be able to mount on a Mac in order to reformat them for Mac use. The Windows utilities supplied on those Flash drives will be deleted when the drive is reformatted for Mac use so I routinely copy them to the Mac desktop first before reformatting.
    You might look into something like this http://eshop.macsales.com/item/NewerTech/U3NVSPATA/ if the enclosure is an issue. It looks weird but it works.

  • I tried to use the Browse button on the left pane to go to a server which has my local copy so that I can FTP to ISP, but I get a window saying there is a permissions issue. How do I resolve?

    I tried to use the Browse button on the left pane to go to a server which has my local copy so that I can FTP to ISP, but I get a window saying there is a permissions issue. How do I resolve?

    If it has a cloud icon, it means it's no longer on your device. Tapping on the cloud will effectively reinstall the app from scratch to your device.
    There is no way to remove it from the cloud, because it's not yours to remove from there. It's the general app repository; you are just given access to it to download content you've already purchased.

  • File adapter Deployment issue

    Hi,
    I have a process that uses a file adapter which polls a certain directory.
    I am developing on Windows XP and deploying to a Linux server.
    I can't leave the path of this directory as it is, since it will not be recognized on the Linux server.
    How do I solve this issue?
    Amit

    You can specify a Logical Name for the directory from which you want to read the file. In the BPEL partner link of the bpel.xml file, you then provide the physical value for that Logical Name. This resolves the mapping between the logical directory name and the actual physical directory name.
    For example, you can specify the Logical Name as InputFileDir and, in bpel.xml, specify the physical value against the partner link for that file adapter, as follows:
    <property name="InputFileDir">C:/ora_home/integration/bpm/samples/tutorials/121.FileAdapter/ComplexStructure/InputDir/</property>
    So during development you can specify the physical directory as it is on your Windows system, and at deployment you can specify it as it is on your Linux system.
    Rahul

  • Jabber For Windows - Calender Integration Option on deployment

    We're about to roll out Jabber for Windows to several hundred clients, and have an issue with the Outlook integration option setting. Our users are migrating from Lotus Notes to Microsoft Outlook and, once migrated to Outlook, will get Jabber for Windows. The problem we have is that when installing Jabber for Windows, in many cases it takes IBM Lotus Notes as the default calendar integration instead of Microsoft Outlook. (Notes is left on users' PCs as they still need it to access some backend databases.)
    We will have to issue instructions to users to go into File > Options > Integration and make sure Microsoft Outlook is selected, but past experience tells us they won't actually read them!
    Does anyone know any way of setting an option on deployment to ensure Microsoft Outlook is selected?
    Thanks
    Kelvin

    Hi David,
    there is a known issue where the default MAPI file can't be opened on some PCs. To confirm this, we would need a PRT from a computer where the issue can be reproduced.
    If you still have the same problem, then create a problem report (Start menu > Cisco Jabber > Cisco Jabber Problem Report) and attach it to this thread. If you are not comfortable attaching the report here, then raise a TAC case for further assistance.
    Regards,
    Nebojsa

  • Dual booting pre-installed Windows 8 and Linux?

    I just bought a G780 with Windows 8 pre-installed. Ideally, I would also like to install some flavor of Linux. I've been searching but not finding much info on how difficult or time-consuming this would be, or if there are any special considerations or anything. I've found some guides on dual-booting Windows 8 and Linux but they were all for installing Windows straight from a disk, not anything where it's already pre-installed, and some indicated they thought it might be problematic to use a pre-installed version of Windows. Anyway I was hoping that if anybody has done this on the G780 or even a similar laptop that you could share how it went and any difficulties you encountered. I'm open to using any Linux distro if it'll be easy to install. I would appreciate any info you could share. Thanks.

    I had issues with Windows 8 as pre-installed on my G580; my employer at the time was using Windows 7 only and would not allow upgrades, for security purposes. I deleted the entire drive (including hidden partitions), knowing that at some point I would re-install Windows 8 or later from a retail disk when needed.
    Linux can resize partitions during the install process, but be careful not to disturb the hidden partition or the Windows boot loader in the process. Linux Mint was the best yet for this purpose; openSUSE and Fedora failed, CentOS failed, and Ubuntu was not able to boot, but did not "hurt" the Windows partitions.
    Good luck.

  • Adobe Acrobat 9 Pro deployment issue

    Hello. I am having a deployment issue with Adobe Acrobat 9 using Altiris. I create the RIP, which is basically an image of the install. My facility has purchased 50 seats for this software, so I know we are covered for the users that have to use it. The issue I am having: after I make the RIP and deploy it to a machine, it asks for the CD key again in order to use the software. Is this an issue with my installation, or has Adobe put some kind of security into their software so that when you make a RIP of the installation it asks for the CD key again after the install? Is this an issue that can be resolved?

    SOLUTION:
    The issue was somehow related to DPI (Start > Settings > Control Panel > Display > Settings tab > Advanced button). Even though the DPI was set to normal, I switched it to LARGE, restarted the machine, logged in after the reboot, changed it back to normal size, restarted again, logged in once more, checked the Printer Preferences and PRESTO --- a properly displayed window.

  • A very urgent deployment issue about DBAdapter

    Hello All,
    I have a very urgent deployment issue about DBAdapter.
    That DBAdapter connects to a DB2 AS400 database. I have a development database (jdbc:as400://server01/TEST) and a production database (jdbc:as400://server01/PROD).
    During development, I used the DBAdapter wizard to create it, imported some tables, and set the adapter to use jdbc/DB2DS as the connection information for easy deployment later.
    Then I deployed to production. I configured data-sources.xml and oc4j-ra.xml correctly, and I set the DB connection to point to the production database. But the DBAdapter still writes into the development database.
    I checked the DBAdapter: the imported tables are something like TEST.table1, TEST.table2. And there are a lot of "TEST" references located in DB2Writer_toplink_mapping.xml, DB2Writer.xml, TEST.schema and DB2Writer.table1.ClassDescriptor.xml.
    This TEST refers to the TEST in the connect string jdbc:as400://server01/TEST.
    I think this might be the cause of the problem. For the production database, "TEST" should be replaced by "PROD". If I change it manually, I have to change it every time we switch between TEST and PROD, and I also don't know if it is safe to do (I tried, and it brought up some TopLink mapping problems).
    By the way, for Oracle Database we use 2 instances for testing and production with the same schema name, so we do not have this issue there.
    Anyone could help and many thanks.
    Kerr
    Message was edited by:
    Kerr

    Hi Kerr,
    The idea is to set up all connections in the BPEL or ESB services with logical names, e.g. typically of the form eis/DB/MyFinancialSystem or eis/DB/MyLogisticsSystem. This way, you do not have to modify code when deploying it onto different environments that serve different purposes.
    When moving your services through their lifecyle, on every environment you deploy these to you will have the same logical connections configured on each instance, e.g. for DEV, QA, SIT, UAT and PROD. Only, in case of QA the actual physical connection is configured to point to the QA instance of the systems that your services interact with whereas in case of UAT it points to the UAT instance of the same system.
    Maybe your problem is caused by connecting as user "SomeUser" when running the DB Adapter wizard during development and actually selecting objects from a different schema than you used to connect with, e.g. "Test" in your case.
    Hth,
    Sjoerd
