WPF datagrid EF DATABASE UPDATE

Hello, can someone help?
I'm getting a null reference exception on a foreign table value whose navigation property I thought was already assigned. Trying to update in a datagrid:
    var ctx = new SomeEntities1();
    TblStudentMission smm = new TblStudentMission();
    TblStudentMission sm = e.Row.DataContext as TblStudentMission;
    smm.Remark = sm.Remark;
    smm.End_Mission = sm.End_Mission;
    smm.EOD_Mission = sm.EOD_Mission;
    smm.StaffId = sm.StudentId; // ??? null, as sm.StaffId is null
    smm.TblMission.Mission_Name = sm.TblMission.Mission_Name; // ??? null, as TblMission is null
    sel.ctx1.TblStudentMission.Add(smm); // TblStudentMission has MissionIdPk as foreign key
Beautiful Distractions...

Hello Magnus,
thanks for the response. I wonder why the TblMission property would be null; all the other properties get assigned from the datagrid row as per the binding. The only difference is that it's a foreign table relation property.
Beautiful Distractions...
Is this model generated from the database?
Put [Include] on the relation.
You could do that in a metadata buddy class if you're likely to generate the model again.
Google buddy class if you're wondering what I'm on about.
It's a standard pattern.
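For reference, a minimal sketch of the buddy-class pattern, assuming the [Include] attribute suggested above (IncludeAttribute comes from WCF RIA Services; in a plain EF/WPF app that particular attribute won't apply, but the pattern itself works for any data annotation):

    using System.ComponentModel.DataAnnotations;

    // The attributes live in this hand-written partial class, so they survive
    // regeneration of the EF model (the generated file can be overwritten freely).
    [MetadataType(typeof(TblStudentMissionMetadata))]
    public partial class TblStudentMission { }

    public class TblStudentMissionMetadata
    {
        [Include] // the RIA Services attribute referred to above
        public object TblMission { get; set; }
    }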
Or you could add that directive when you read it.
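As a minimal sketch of that read-time approach, assuming the entity and property names from the post (yourDataGrid is a placeholder name):

    var ctx = new SomeEntities1();
    // Eager-load the TblMission relation so sm.TblMission is populated instead
    // of null; the string overload of Include works for both ObjectContext-
    // and DbContext-generated models.
    var missions = ctx.TblStudentMission
                      .Include("TblMission")
                      .ToList();
    yourDataGrid.ItemsSource = missions;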
But Magnus is correct.
It's in your own interest to ask questions in a forum that specialises in the subject you're asking about.
Hope that helps.
Recent Technet articles:
Property List Editing;  
Dynamic XAML

Similar Messages

  • How to stop or reduce the RAM usage as the WPF DataGrid is continuously being updated by a background worker?

    Hi,
    I am developing a packet sniffer application. I am getting packets from the adapter and updating the information in the WPF DataGrid using a BackgroundWorker. It is a continuous process, so if I run this application for hours, after 5 or 6 hours I get a "System.OutOfMemoryException" as the RAM fills up. Now my requirement is that the RAM usage should have some limit (say 750MB) for my application; once the application reaches this, the RAM usage should not increase, and at the same time I should not get an "OutOfMemoryException". I mean the application should free the cached memory of the starting rows of the DataGrid: I don't need to display the initial rows, only the latest entries. How can I achieve this?
    thanks,
    EDIT:
    Here I am using DataGrid.Items.Add() method to add the entries into the datagrid.
    How can I change my code to use ObservableCollection items now?

    You could remove entries from the ItemsSource collection of the DataGrid once the number of items reaches a certain threshold.
    If you for example use an ObservableCollection<T> as the ItemsSource of the DataGrid, you could handle its CollectionChanged event and remove the oldest item when there is a total of, say, over 5000 items in it (you may of course increase this value in your particular application if you need to display more than 5000 items in the DataGrid). This will make sure that there are never any more than 5000 items kept in memory:
    const int MaxCount = 5000;

    public MainWindow()
    {
        InitializeComponent();

        ObservableCollection<string> collection = new ObservableCollection<string>();
        collection.CollectionChanged += (ss, ee) =>
        {
            // Trim the oldest entry whenever the cap is exceeded.
            if (collection.Count > MaxCount)
                collection.RemoveAt(0);
        };
        yourDataGrid.ItemsSource = collection;
    }
    Since there is no good way to determine exactly how much physical memory the objects in the ObservableCollection<T> occupy, you'd better choose a maximum number of items to keep in memory (like MaxCount = 5000 in the sample code above) and then remove the oldest one when the number of items reaches this maximum.
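    One caveat the reply above doesn't cover: an ObservableCollection bound to the UI must be modified on the UI thread, so packets arriving on the BackgroundWorker's thread should be marshalled through the Dispatcher. A minimal sketch, where ReadNextPacket and capturing are hypothetical placeholders for your capture loop:

    worker.DoWork += (s, args) =>
    {
        while (capturing) // hypothetical flag that ends the capture loop
        {
            string packet = ReadNextPacket(); // hypothetical capture call
            // Add must run on the UI thread that owns the collection.
            Dispatcher.Invoke(() => collection.Add(packet));
        }
    };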
    Hope that helps.
    Please remember to close your threads by marking helpful posts as answer and then start a new thread if you have a new question.

  • EF database add new item from WPF datagrid

     var ctx = new SomeEntities1();
     TblStudentMission smm = new TblStudentMission();
     TblStudentMission sm = e.Row.DataContext as TblStudentMission;
     smm.Remark = sm.Remark;
     smm.End_Mission = sm.End_Mission;
     smm.EOD_Mission = sm.EOD_Mission;
     smm.StudentId = sm.StudentId; // ??? null, as sm.StaffId is null
     smm.TblMission.Mission_Name = sm.TblMission.Mission_Name; // ??? null, as TblMission is null
     sel.ctx1.TblStudentMission.Add(smm); // TblStudentMission has MissionIdPk as foreign key from TblMission
    Beautiful Distractions...

    Need help on how to approach this: adding a new item to a WPF datagrid. In the RowEditEnding handler, I instantiate my entity from the DataContext of the datagrid row as below:
    var s = new TblStudentMission();
    var sm = e.Row.DataContext as TblStudentMission;
    The problem is that TblStudentMission has a foreign column MissionName that needs to be filled from the datagrid entry, but sm.TblMission.MissionName comes up as null. Any way of making it load? Some type of eager loading? Or any other approach is welcome.
    Beautiful Distractions...
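    One alternative sketch, assuming TblStudentMission exposes the MissionIdPk foreign key as a scalar property (names taken from the post): set the FK column directly instead of dereferencing the navigation property, which is always null on a freshly constructed entity:

    // In the RowEditEnding handler: copy the scalar values and set the
    // foreign key column rather than touching the (null) TblMission property.
    var sm = e.Row.DataContext as TblStudentMission;
    if (sm != null)
    {
        var smm = new TblStudentMission
        {
            Remark = sm.Remark,
            End_Mission = sm.End_Mission,
            EOD_Mission = sm.EOD_Mission,
            StudentId = sm.StudentId,
            MissionIdPk = sm.MissionIdPk // FK value instead of sm.TblMission
        };
        ctx.TblStudentMission.Add(smm);
        ctx.SaveChanges();
    }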

  • Locking issue in workflow with consecutive database update

    Dear Workflowers,
    We are on ECC 5.0, release 6.40. We went live with SAP in February and we are currently using workflow in the PLM module for DMS and ECM.
    We have been facing a locking issue that happens randomly in our production and quality systems. The error from the workflow log is "Document XXXX is locked by WF-BATCH". I have two steps in the workflow: one is to update the document user (from originator to editor, with custom BO "zdraw" and new method "setuser"), and the next step is to update the document status (BO "zdraw" method "setstatus", which is inherited from the standard BO "draw").
    I have tried using "wait" (1st try), the function modules "BAPI_DOCUMENT_ENQUEUE" and "BAPI_DOCUMENT_DEQUEUE" (2nd try), and "COMMIT WORK AND WAIT" (3rd try) to add one step in between; however, the issue remains.
    The other question I had: we need to write "COMMIT WORK" when we use a BAPI to perform a database update in an ABAP program, but I don't see "COMMIT WORK" in the methods of the BO (for example "setstatus" in the "draw" object) that perform database updates. How does workflow perform the DB update properly without "COMMIT WORK" when referencing the standard method?
    Could anyone please share your expertise with the issue I am facing?
    Thank you in advance,
    Merta

    Hi Merta,
    Regarding COMMITs: theoretically you should never use COMMIT statements because the Workflow runtime handles that - the transaction of executing the task is the LUW, not your method. By adding COMMIT WORK you are also committing the workflow task execution.
    In practice however there are the occasional exceptions where something just won't work without an explicit commit - but the theory remains that you should always try it without.
    Regarding your problem, the one way to be certain that a DB update is complete is to use a terminating event - either through change documents or status management.
    Failing that, you can write a wrapper method for SETSTATUS that does something like:
    do 10 times.
      try to lock it.
      if success.
        unlock.
        swc_call_method self 'SetStatus' container.
        set success flag.
      else.
        wait up to 3 seconds.
      endif.
    enddo.
    if no success, raise exception.
    Cheers,
    Mike

  • How to "roll back" the model if a database update fails

    I'd like to know if there's a way to "roll back" the model (the values in the managed beans) if a service call to a database fails during the Invoke Application phase. My understanding of the JSF lifecycle is that the values in the model are updated before a service call to a database would happen. If the database update operation fails, it seems to me that the data in the model and the database would be inconsistent.
    For example, say I have a ridiculously simple application that updates a names database. The fields are id, last name, first name. The application has a simple view that allows the user to update a name in the database. The user updates the last name field for a particular record from "Doe" to "Smith" and clicks "submit". The request goes through the lifecycle, the model values are updated from the web form during the Update Model Values phase, then the call to update the database fails during the Invoke Applications phase (the database connection failed). When the Render Response phase completes, the UI would display an error message where I put <h:messages> but the last name value in the UI would show "Smith" instead of "Doe". Is there something I can do to roll the model value back to "Doe" if the database call fails?
    I know that I could redirect to a technical error page, but I'd like to avoid it unless it is the best thing to do.
    I'd appreciate any advice you can offer.
    Thanks.
    - Luke

  • To perform database update in a module with AT EXIT-COMMAND addition

    Dear All,
    I have a function code 'EXIT' with function type 'E'. When this function code is triggered, my screen should close.
    In the module that handles this function code (defined with the AT EXIT-COMMAND addition), I will prompt the user whether he/she wants to save the data before exiting, using the POPUP_TO_CONFIRM_STEP function module. The text message in the dialog box is "Do you want to save before exiting?".
    When the user wants to save the data, a simple UPDATE statement will be executed to write the data on screen to database.
    The problem here is that since the module is defined with the AT EXIT-COMMAND addition, the data on screen won't be copied to the corresponding variables in the code (correct me if I'm wrong). Therefore, even though the database update is performed, the data written to the database is no different from the original.
    How to perform database update in a module with AT EXIT-COMMAND addition?
    or
    Is it even a "custom" or a "good practice" to prompt user to save data before exiting?
    Thanks in advance,
    Haris

    With an exit command, if there's anything that would be lost, I would prompt "Data will be lost, do you wish to continue?". If they do, the database is not updated; if they don't, they stay in the transaction.
    This is because the exit command runs before validation. So how can you know the data is correct?
    If you have a button to leave the transaction that isn't an exit command, then you could prompt to save instead. There the choices should be - quit without saving, save and quit, don't quit.
    Doing a database update in an exit command is not a good idea.
    matt
    Edited by: Matt on Mar 15, 2011 11:53 AM

  • Multiple database updates versus Transaction

    Hi,
    I need some help from you great minds. Here is what I am trying to accomplish:
    I have a message-driven bean which makes multiple update calls to an Oracle database.
    I want to commit all the db updates only at the end (after all update calls execute okay), and if any problem occurs in any of the update calls, I want to roll back all the previously successful update calls I have already made. I am using container-managed transactions via an MDB.
    I tried to use the UserTransaction.setRollbackOnly() call but that did not completely help. This call rolled back the message altogether. All I wanted to do is, if an error occurs during a database update, roll back just the database changes and throw away the message. Is there a way I can roll back only the database changes? Any suggestions? Please. Thanks

    If I understand correctly what you want to do, is the purpose to do some task or run some script in multiple databases on the same server?
    If so, this is done easily by listing the database (sids) in a file and reading the file in a loop statement.
    In my case, I simply create a file on the server called localsids. I keep this in /var/opt/oracle directory.
    Then, in my script, I set:
    SIDFILE='/var/opt/oracle/localsids'
    NEWPASS=`cat $HOME/.xlh/sys`
    # This loop reads through the 'sidlist' and then looks for a password
    # stored in a separate directory for each sid, but if individual
    # directories do not exist, then it uses the standard system password.
    # It then opens a sqlplus session for each sid (as it loops through the
    # sidfile and executes some sql statement(s), or executes a sql script.
    cat $SIDFILE | while read SID
    do
    ORACLE_SID=$SID
    export ORACLE_SID
    echo $SID # this is only for my own verbose purposes
    sqlplus -s system/manager@$SID <<EOF > /tmp/chg_passwd_${SID}.sql
    alter user system identified by $NEWPASS;
    alter user sys identified by $NEWPASS;
    EOF
    done
    exit
    # In the above example, i am changing the sys and system passwords for all databases listed in the localsids file.
    Hope this helps...
    ji li
    Message was edited by: ji li to simplify the example...
    I have simplified the above example to hardcode the system password into this script, however, normally I would never do this in real practice. This is just as an example to simplify how to run a loop to run a common script or sql statement in each database.

  • CcBPM, Async Messages and database update performance

    We have several business processes defined and running in XI 3.0. Some are being invoked as web services.
    Our performance/scalability is extremely poor. We believe the problem is the database, which apparently is being updated at a phenomenal rate. The database disk I/O is exceptionally high, and has been tracked to writes to the database container/table files and transaction logs. The database has been carefully tuned, and we are rushing to migrate the db to a higher-capacity machine.
    It would seem, based on documentation, that the database updates are a result of the asynch message interfaces we are using through the business processes. 
    The first question is are we correct in the above statement?
    The second question is are the db updates being done under a transactional scope, if so what is the isolation level of the update, and can we change the isolation level?
    Third question, if the updates are due to the asynch message interfaces, can we use synch message interfaces and reduce the database work? If so, how?
    Finally,

    I don't think you'll find a documented general number of application variables you can have on a single server. But I think you could get away with 400 variables without any trouble.
    Where you might run into trouble, though, is when you have multiple concurrent requests trying to read and change these values. You'll have to ensure that you single-thread write access to these variables using CFLOCK.
    Dave Watts, CTO, Fig Leaf Software
    http://www.figleaf.com/
    http://training.figleaf.com/
    Fig Leaf Software is a Veteran-Owned Small Business (VOSB) on
    GSA Schedule, and provides the highest caliber vendor-authorized
    instruction at our training centers, online, or onsite.

  • Constructing Database Update DML Statements

    This is a very common problem I am sure but want to know how others handle it. I am creating a web application via Servlets and JSPs. I am working with Oracle 10g.
    Here's the problem/question.
    When I present an HTML form to a user to update an existing record, how do I know what has changed in the record so I can create the DML statement? I could just update the entire record with the values supplied via the POST data on submit, but that does not seem right, since maybe only 1 out of 10 fields actually changed. This would also write needless audit and logging information to the database. I have thought about comparing the request parameters sent with the POST against the original Java bean used to populate the form in the first place, by adding it back to the request as an attribute. Is this the only way to do it, or is there a better way?
    Thanks everyone,
    -Brian

    What database drivers are you using? I recall there being some bug in
    sp2 that caused delayed transaction commits for a very specific
    combination of transaction settings and database drivers.
    I think it was third-party type II Oracle drivers and local
    transactions, but I'm not sure.
    In any case, I'd contact support as there is a patch available.
    David
    Gurjit wrote:
    Hi all,
    I have posted something of this sort earlier too. The problem is that
    the database is not reflecting what has been updated using queries from the
    application.
    The structure of the application is as follows.
    DB = Oracle 8.1.6
    Web Server = iPlanet 4.0
    App Server = iPlanet 6.0 sp2
    The database is connected to via the EJB's. No Database transactions are
    being used except for Bean transactions which are container managed. The
    setAutoCommit flag is set to true. Each query is a transaction. The jsp's
    call these ejb's through wrapper classes. As stated earlier the database
    updates (Inserts, update, delete statements) are not getting reflected in
    the database immediately. The updates happen after a gap of about 7-10
    minutes. Why is this sort of behaviour coming up. This is almost like batch
    updates happening on the database. Is there a setting in IPlanet for
    removing this kind of problem.
    Regards,
    Gurjit

  • Database update failed for some organizations when installing Update Rollup 1 for Microsoft Dynamics CRM 2013 Service Pack 1

    Hi, 
    We get the following error in the logfile when we try to install the latest update for CRM: (KB2953252)
    Does anyone know how to fix this problem? 
    09:10:41|   Info| Database update install failed for orgId = 35e7ca08-43fb-4440-ba18-acfc3f42e115.  Continuing with other orgs.  Exception: System.ArgumentNullException: Value cannot be null.
    Parameter name: type
       at System.Activator.CreateInstance(Type type, Boolean nonPublic)
       at System.Activator.CreateInstance(Type type)
       at Microsoft.Crm.Setup.Database.DllMethodAction.Execute(Guid organizationId)
       at Microsoft.Crm.Setup.Database.DatabaseInstaller.ExecuteReleases(ReleaseInfo releaseInfo, Boolean isInstall)
       at Microsoft.Crm.Setup.Database.DatabaseInstaller.Install(Int32 languageCode, String configurationFilePath, Boolean upgradeDatabase, Boolean isInstall)
       at Microsoft.Crm.Setup.Database.DatabaseInstaller.InstallUpdate(String configurationFilePath, Boolean upgradeDatabase)
       at Microsoft.Crm.Setup.Common.Update.DBUpdateDatabaseInstaller.OrgInstall(ArrayList orgIdArray)

  • Multiple Database Updates

    Hi
    In development environment I have many branches(copies) of a database.
    For every change (DDL, DML) I have to log in to every database manually to execute statements. Is there a client tool that supports multiple database updates, or has any expert ever made a customized routine for that?
    And ideally, as I have different named branches, I wish I could define the scope of a change too, i.e. have a change execute on databases of the "abc" & "pqr" branches only.
    Wishes

    If I understand correctly what you want to do, is the purpose to do some task or run some script in multiple databases on the same server?
    If so, this is done easily by listing the database (sids) in a file and reading the file in a loop statement.
    In my case, I simply create a file on the server called localsids. I keep this in /var/opt/oracle directory.
    Then, in my script, I set:
    SIDFILE='/var/opt/oracle/localsids'
    NEWPASS=`cat $HOME/.xlh/sys`
    # This loop reads through the 'sidlist' and then looks for a password
    # stored in a separate directory for each sid, but if individual
    # directories do not exist, then it uses the standard system password.
    # It then opens a sqlplus session for each sid (as it loops through the
    # sidfile and executes some sql statement(s), or executes a sql script.
    cat $SIDFILE | while read SID
    do
    ORACLE_SID=$SID
    export ORACLE_SID
    echo $SID # this is only for my own verbose purposes
    sqlplus -s system/manager@$SID <<EOF > /tmp/chg_passwd_${SID}.sql
    alter user system identified by $NEWPASS;
    alter user sys identified by $NEWPASS;
    EOF
    done
    exit
    # In the above example, i am changing the sys and system passwords for all databases listed in the localsids file.
    Hope this helps...
    ji li
    Message was edited by: ji li to simplify the example...
    I have simplified the above example to hardcode the system password into this script, however, normally I would never do this in real practice. This is just as an example to simplify how to run a loop to run a common script or sql statement in each database.

  • SAP database updations

    Hi all
         I just want to know whether we can do SAP database updates through a Java program. If we can, then please explain to me how. I just want to access the data in the (SAP) database and do the updates whenever we need to, through the Java program.
    Thank You.
    Regards
    Giri

    Giri,
    Ya... the Java app sends data to XI and XI needs to insert the data into the database.
    Before the next round of data is sent by the application, XI should send back info on the status of the records.
    Is this what you want?
    This will be, as I have pointed out earlier, possible without a BPM. But make the call from the sending application synchronous, and then map the JDBC response to the calling application.
    But this update is only possible through XI.
    If I got the requirement wrong, can you let me know in more detail what it is that you are trying to do?
    Reward if solved

  • AUXILIARY database update using full backup from target database

    Hi,
    I am now facing the problem of how to implement an AUXILIARY database update so that it stays consistent with the target database over a certain period (a week). I do a full backup of our target database every day using RMAN. I know it is possible to use expdp to achieve this, but I want to use the current full backup to do it. Does anybody have an idea or experience with that? Thanks in advance!
    Regards,
    lik

    That's OK. If you don't use RMAN to clone your database, you can simply create a database using a cold backup of the primary database.
    Important things are
    1) you must catalog all datafiles as image copy level 0 in the cloned database
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@clonedb (in host 2)
    RMAN> catalog datafilecopy
    '/oracle/oradata/CLONE/datafile/abc.dbf',
    '/oracle/oradata/CLONE/datafile/def.dbf',
    '/oracle/oradata/CLONE/datafile/ghi.dbf'
    level 0 tag 'CLONE';
    2) You need to make incrementals of the primary database to refresh the clone database. Make sure that you specify a tag for the incremental and that the name of the tag is exactly the same as the one used in step (1).
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@prod (in host 3)
    RMAN> backup incremental level 1 tag 'CLONE' for recover of copy with tag 'CLONE' database format '/backup/%u';
    3) Copy the newly created incrementals (in host 3) to the clone database site (host 2). Make sure the directory is exactly the same.
    $ rcp /backup/<incr_backup> /backup/
    -- rcp <the loc of a incremental in host 3> <the loc of a incremental in host 2>
    4) Apply incrementals to update the clone database. Make sure you provide the tag you specified.
    RMAN> connect catalog rman/rman@rcvcat
    RMAN> connect target sys/manager@clone
    RMAN> recover copy of database with tag 'CLONE';
    5) After updating the clone database, delete the incremental backups and uncatalog the image copies
    RMAN> delete backup tag 'CLONE';
    RMAN> change copy like '/oracle/oradata/CLONE/datafile/%' uncatalog;
    *** As you can see, you can clone a database using any method. The key is that you have to catalog the clone database when you refresh it. After finishing, uncatalog.

  • Database updates statistics maintenance plan issue.

    Hi team,
    We configured one job through a maintenance plan; the job name is "database update statistics" and the database size is 280 GB. This job runs for 13 to 15 hours and never finishes; it just keeps running.
    When I run the same job through the script below, it completes within 2 hours.
    Use database
    Go
    Exec sp_updatestats
    What is the main problem with this maintenance plan?
    Note: there are no other jobs and no traffic on this server, only the abc_update subplan1 job.

    Hello,
    Updating stats for a whole database of 280 GB will always be problematic. It is better to run update statistics for the tables and indexes which change frequently.
    Now to your question, a few points which sp_updatestats lists in BOL:
    http://technet.microsoft.com/en-us/library/ms173804.aspx
    sp_updatestats updates statistics on disabled nonclustered indexes and does not update statistics on disabled clustered indexes.
    sp_updatestats updates only the statistics that require updating based on the rowmodctr information in the sys.sysindexes catalog view, thus avoiding unnecessary updates of statistics on unchanged rows.
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Database Update Pending

    Hello Everyone,
    I've recently redistributed the workload for my installation of File Reporter 2.0 so that of the 60-something servers I am scanning, none of the scan workload happens on the engine server. This does seem to have improved the performance of the engine in generating scheduled reports. However, I am still having performance problems on the last phase of scanning -- the point at which the scan says "Database Update Pending". There are scans from the 15th still stuck in this phase (the last completed scans are from the 14th).
    I assume that this is because I am using the internal postgres database on the engine, and that all of the scans trying to check in (from the 15th to today) are causing the database to spin. Which is a pity, because I would have thought that a product like NFR would have some nice means of letting each scan be processed by the database in batch. Or maybe it does, in which case the problem is that the database is just taking a long time for each individual dataset to be processed.
    Have any of you seen this? Any advice for kicking it along? My engine server is not underpowered -- it has a couple of processors and 12GB of RAM (I really should take that down to 6, actually ...) and right now the CPU shows as nearly idle -- the NFRENgine is using 2% and system idle is using the rest. I'd prefer not to move to an external DB (especially since utilization doesn't seem to be the issue) but I'm willing to try if a case can be made.
    Johnnie Odom
    School District of Escambia County

    Hi Johnnie,
    As Duncan already mentioned, the engine log will be interesting to look at what happens.
    NFR is able to spread the load for the agent scan; however, these agents send their scan info to the Engine, where a separate process imports it into the central database. Unfortunately NFR only handles one import at a time, so if you have a lot of scan data coming in it could take some time to get them all imported.
    Ron

Maybe you are looking for

  • How do I get Lightroom to find photos on a USB Drive when the Drive letter changes?

    This is probably a simple question but I keep getting stuck. I have my catalog and some images on my laptop hard drive, but the majority are on an external USB drive. Sometimes the drive gets assigned E:, F:, or G:, and when it changes, the catal

  • Authorization error while activating DSO

    Hi Gurus, when I am trying to activate a DSO, it is throwing the following error. Can anyone let me know what type of authorization object I should give to resolve this issue? You do not have authorization for the DWB object Message no. RSM831 CAUSE

  • Configuration of AirPort Extreme Using Third Party LAN Ethernet Adapter?

    Hello; I am trying to set up my own wireless connection, not without great consternation. Any help would be greatly appreciated. Here is my current situation: Subscribed to DSL service - installation went well, and it works fine on two other computer

  • Export Query SQL multi-linee in XML format and Import XML in SAP

    Good morning. Does someone know if it is possible, after exporting a SQL query on OITT and ITT1 with thousands of rows to XML (with values already changed through the query), to then import the XML into SAP without using a loop, with the above vb.net code? oDistinta = DirectCast(m_o

  • Set classpath

    Hi, I want to run a java program from http://www.wutka.com/dtdparserdownload.html but I need to set the classpath (with the .jar file). Does anyone know how to do this?