Controlling 3 pumps in a network

Hi,
I want to control 3 syringe pumps (two New Era NE-500 and one NE-1000) in a network.
Right now I am using an NI driver to control a single pump, but I'm not sure whether I can use this driver for all 3 pumps. I'm a beginner in LabVIEW and need help.
The manual is attached.
Thanks a lot
Attachments:
NE-1000_manual_QiS_Prosense.pdf 580 KB

Hi,
I may be able to help you - outside of LabVIEW.
I sell an application called SyringePumpPro whose purpose is to control multiple New Era syringe pumps via RS-232.
As I understand it, the VI is for a single pump, and it is unsupported. If you want to control 3 New Era pumps, head over to SyringePumpPro and download the free trial.
I am monitoring these forums to see if there is a large enough market for me to create an up-to-date VI that supports multiple pumps. If you are interested, please leave a message via my site. I have not set up LabVIEW and tried the VI that's available, but I have a page on my site where I collate what I currently know about the VI from communicating with folks in the same position as you.
If you need to know more about communicating with the pumps, by all means post here or email via my site. I have a lot of experience writing software to communicate with them. As it happens, today I am testing the next version of SyringePumpPro using the same pump network as yours: two NE-500s and a single NE-1000.
Good luck
Tim Burgess
CEO SyringePumpPro
[email protected]

Similar Messages

  • Data Pump Export error - network mounted path

    Hi,
    Please have a look at the Data Pump error I am getting while doing an export. I am running version 11g. Please help with your feedback.
    I get the error because of the network-mounted path for the directory (ORALOAD); it works fine with a local path. I have given full permissions on the network path, and UTL_FILE is able to create files there, but Data Pump fails with the error messages below.
    Oracle 11g
    Solaris 10
    Getting the error below:
    ERROR at line 1:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3444
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3693
    ORA-06512: at line 64
    DECLARE
    p_part_name VARCHAR2(30);
    p_msg VARCHAR2(512);
    v_ret_period NUMBER;
    v_arch_location VARCHAR2(512);
    v_arch_directory VARCHAR2(20);
    v_rec_count NUMBER;
    v_partition_dumpfile VARCHAR2(35);
    v_partition_dumplog VARCHAR2(35);
    v_part_date VARCHAR2(30);
    p_partition_name VARCHAR2(30);
    v_partition_arch_location VARCHAR2(512);
    h1 NUMBER; -- Data Pump job handle
    job_state VARCHAR2(30); -- To keep track of job state
    le ku$_LogEntry; -- For WIP and error messages
    js ku$_JobStatus; -- The job status from get_status
    jd ku$_JobDesc; -- The job description from get_status
    sts ku$_Status; -- The status object returned by get_status
    ind NUMBER; -- Loop index
    percent_done NUMBER; -- Percentage of job complete
    --check dump file exist on directory
    l_file utl_file.file_type;   
    l_file_name varchar2(20);
    l_exists boolean;
    l_length number;
    l_blksize number;
    BEGIN
    p_part_name:='P2010110800';
    p_partition_name := upper(p_part_name);
    v_partition_dumpfile :=  chr(39)||p_partition_name||chr(39);
    v_partition_dumplog  :=  p_partition_name || '.LOG';
         SELECT COUNT(*) INTO v_rec_count FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
             IF v_rec_count != 0 THEN
               SELECT
               PARTITION_ARCHIVAL_PERIOD
               ,PARTITION_ARCHIVAL_LOCATION
               ,PARTITION_ARCHIVAL_DIRECTORY
               INTO v_ret_period , v_arch_location , v_arch_directory
               FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
             END IF;
         utl_file.fgetattr('ORALOAD', l_file_name, l_exists, l_length, l_blksize);      
            IF (l_exists) THEN        
             utl_file.FRENAME('ORALOAD', l_file_name, 'ORALOAD', p_partition_name ||'_'|| to_char(systimestamp,'YYYYMMDDHH24MISS') ||'.DMP', TRUE);
         END IF;
        v_part_date := replace(p_partition_name,'P');
            DBMS_OUTPUT.PUT_LINE('inside');
    h1 := dbms_datapump.open (operation => 'EXPORT',
                              job_mode  => 'TABLE');
              dbms_datapump.add_file (handle    => h1,
                                      filename  => p_partition_name ||'.DMP',
                                      directory => v_arch_directory,
                                      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
              dbms_datapump.add_file (handle    => h1,
                                      filename  => p_partition_name||'.LOG',
                                      directory => v_arch_directory,
                                      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
              dbms_datapump.metadata_filter (handle => h1,
                                             name   => 'SCHEMA_EXPR',
                                             value  => 'IN (''HDB'')');
              dbms_datapump.metadata_filter (handle => h1,
                                             name   => 'NAME_EXPR',
                                             value  => 'IN (''SUBSCRIBER_EVENT'')');
              dbms_datapump.data_filter (handle      => h1,
                                         name        => 'PARTITION_LIST',
                                        value       => v_partition_dumpfile,
                                        table_name  => 'SUBSCRIBER_EVENT',
                                        schema_name => 'HDB');
              dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
              dbms_datapump.start_job (handle => h1);
                  dbms_datapump.detach (handle => h1);              
    END;
    /

    Hi,
    I tried to generate the dump with expdp instead of the API and got more specific error messages,
    but the log file does get created on that same path.
    expdp hdb/hdb DUMPFILE=P2010110800.dmp DIRECTORY=ORALOAD TABLES=(SUBSCRIBER_EVENT:P2010110800) logfile=P2010110800.log
    Export: Release 11.2.0.1.0 - Production on Wed Nov 10 01:26:13 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, Automatic Storage Management, OLAP, Data Mining
    and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/nfs_path/lims/backup/hdb/datapump/P2010110800.dmp"
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    Additional information: 3
    Edited by: Sachin B on Nov 9, 2010 10:33 PM
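    For reference, ORA-27054 means the NFS share itself is not mounted with the options Oracle expects, so the cleanest fix is to remount it per the platform documentation (hard mount, suitable rsize/wsize, and so on). A workaround often quoted in support notes is to relax the mount-option check with event 10298; treat the following as a hedged sketch and verify it for your version before relying on it:
    -- relax Oracle's NFS mount-option check (test at session level first)
    ALTER SESSION SET EVENTS '10298 trace name context forever, level 32';
    -- or instance-wide until the share can be remounted correctly
    ALTER SYSTEM SET EVENTS '10298 trace name context forever, level 32';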

  • DBMS_DATAPUMP - can it pump to a network drive?

    I'm trying to execute a PL/SQL block that looks like this:
    DECLARE
    PUMP_HANDLE NUMBER := 0;
    STATE_OF_JOB VARCHAR2(30) := 'UNKNOWN';
    FULL_STATUS KU$_STATUS := NULL;
    BEGIN
    PUMP_HANDLE := DBMS_DATAPUMP.OPEN ('EXPORT','TABLE');
    DBMS_DATAPUMP.ADD_FILE (HANDLE => PUMP_HANDLE, FILENAME => 'b13.dmp', DIRECTORY => 'DARREN_S_2ND_DIR', FILESIZE => '500MB', FILETYPE => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    END;
    And the ADD_FILE line returns this stack trace:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2486
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 2718
    ORA-06512: at line 6
    DARREN_S_2ND_DIR was created like this, where H: is a network drive (that the database server has access to, obviously):
    create or replace directory DARREN_S_2ND_DIR as 'H:\DMORBY\GRANULAR_BACKUP';
    It seems to work okay with a different directory that was created as a local directory on the server. (It's Oracle 10g on Windows Server 2003.)
    Is it possible that creating a directory on a network drive is the problem, or have I missed something else?

    Go to task manager and see who is running the oracle.exe process. Or check the services applet and look at the "Log On As" for your db service. If it's you as well, then I dunno. If it's Local System, then you need to modify the service to Log On As a real account with privs to that drive. After you do that, shutdown the database and restart the service.
    Edit:
    And of course you may need to change ownership of all the database files, as well.
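    Adding to that: a mapped drive letter like H: only exists inside an interactive logon session, so even with the service running under a real account it is usually more reliable to point the directory object at a UNC path. A hedged sketch (the \\fileserver share name is a placeholder for your actual share):
    -- recreate the directory against a UNC path the service account can reach
    CREATE OR REPLACE DIRECTORY DARREN_S_2ND_DIR AS '\\fileserver\DMORBY\GRANULAR_BACKUP';
    -- sanity check: the exact path Oracle will use
    SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DARREN_S_2ND_DIR';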

  • Problem controlling peristaltic pump

    We are trying out LabVIEW 9.0 to determine if we want to use it to control instruments for our experiments, and we are having a lot of trouble hooking up our peristaltic pump to the computer. It is an Ismatec Reglo Digital pump, which LabVIEW does not have a plug-and-play driver for.
    Instead we downloaded the LabVIEW driver provided by Ismatec, but we have not been able to get it to work, since we cannot find any directions on how to install and operate a vendor-supplied driver. The following error message keeps coming up whenever we try to plug the pump in:
    Error -1073807202 occurred at Property Node (arg 1) in VISA Configure Serial Port (Instr).vi->RegloPumpenDriver.vi->Controlling_the_Pump_Reglo_D.vi->Reglo Digital.vi
    Does anyone know how to hook up this type of instrument to LabVIEW?
    Thanks

    Hi
    The explanation for that error code is:
    LabVIEW:  (Hex 0xBFFF009E) VISA or a code library required by VISA could not be located or loaded.  This is usually due to a required driver not being installed on the system.
    =========================
    VISA:  (Hex 0xBFFF009E) A code library required by VISA could not be located or loaded.
    I had a look at the driver and it was written for LabVIEW 6.2, so something will have changed by LabVIEW 2009. The driver consists of a number of VIs (programs) that call a lower-level driver, NI-VISA. What version of VISA do you have installed? You can check that in the Measurement and Automation Explorer (MAX) under "Software".
    Best Regards
    David
    NISW

  • UDI-00018: Data Pump client is incompatible with database version 11.2.0.1

    Hi
    I am trying to import data into Oracle 11g Release 2 (11.2.0.1) using the impdp utility and am getting the error below:
    UDI-00018: Data Pump client is incompatible with database version 11.2.0.1.0
    The export dump was taken on a database running Oracle 11g Release 1 (11.1.0.7.0), and I am trying to import into a higher version of the database. Is there any parameter I have to set to avoid this error?

    AUTHSTATE=compat
    A__z=! LOGNAME
    CLASSPATH=/app/oracle/11.2.0/jlib:.
    HOME=/home/oracle
    LANG=C
    LC__FASTMSG=true
    LD_LIBRARY_PATH=/app/oracle/11.2.0/lib:/app/oracle/11.2.0/network/lib:.
    LIBPATH=/app/oracle/11.2.0/JDK/JRE/BIN:/app/oracle/11.2.0/jdk/jre/bin/classic:/app/oracle/11.2.0/lib32
    LOCPATH=/usr/lib/nls/loc
    LOGIN=oracle
    LOGNAME=oracle
    MAIL=/usr/spool/mail/oracle
    MAILMSG=[YOU HAVE NEW MAIL]
    NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
    NLS_DATE_FORMAT=DD-MON-RRRR HH24:MI:SS
    ODMDIR=/etc/objrepos
    ORACLE_BASE=/app/oracle
    ORACLE_HOME=/app/oracle/11.2.0
    ORACLE_SID=AMT6
    ORACLE_TERM=xterm
    ORA_NLS33=/app/oracle/11.2.0/nls/data
    PATH=/app/oracle/11.2.0/bin:.:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/oracle/bin:/usr/bin/X11:/sbin:.:/usr/local/bin:/usr/ccs/bin
    PS1=nbsud01[$PWD]:($ORACLE_SID)>
    PWD=/nbsiar/nbimp
    SHELL=/usr/bin/ksh
    SHLIB_PATH=/app/oracle/11.2.0/lib:/usr/lib
    TERM=xterm
    TZ=Europe/London
    USER=oracle
    _=/usr/bin/env
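    For what it is worth, UDI-00018 normally points at the impdp client binary rather than at the dump file: the client being invoked does not match the database it connects to (for example a different-version client found earlier on the PATH, or a connection to a different instance than intended). A quick hedged check from SQL*Plus on the target:
    -- the version reported here should match the banner the impdp client prints
    SELECT banner FROM v$version WHERE banner LIKE 'Oracle Database%';
    SELECT instance_name, version FROM v$instance;
    If they differ, make sure the 11.2 ORACLE_HOME/bin is first on the PATH so the matching impdp is picked up.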

  • Cannot get distcc/pump to work

    I have two machines on a GigaLAN both running distccd. 
    1) Workstation (Intel i7-3770K) with 8 logical CPUs.
    2) Laptop (Intel C2D) with 2 logical CPUs.
    On my workstation:
    % cat /etc/distcc/hosts
    localhost/9,cpp,lzo
    192.168.0.102/3,cpp,lzo
    On my laptop:
    % cat /etc/distcc/hosts
    localhost/3,cpp,lzo
    192.168.0.101/9,cpp,lzo
    Just the workstation:
    % make -j8 bzImage 496.11s user 25.67s system 712% cpu 1:13.26 total
    Trying to use pump/distcc
    On the laptop...
    % eval pump --startup
    On the workstation...
    % eval pump --startup
    % pump make -j12 bzImage CC=distcc
    __________Using distcc-pump from /usr/bin
    __________Using 2 distcc servers in pump mode
    make[1]: Nothing to be done for `all'.
    HOSTCC scripts/basic/fixdep
    CHK include/generated/uapi/linux/version.h
    CC arch/x86/xen/setup.o
    CC arch/x86/realmode/rm/video-vga.o
    CC arch/x86/mm/fault.o
    CC arch/x86/realmode/rm/video-vesa.o
    distcc[25252] ERROR: compile arch/x86/realmode/rm/video-vesa.c on 192.168.0.102/3,cpp,lzo failed
    distcc[25252] (dcc_build_somewhere) Warning: remote compilation of 'arch/x86/realmode/rm/video-vesa.c' failed, retrying locally
    distcc[25214] ERROR: compile arch/x86/realmode/rm/video-vga.c on 192.168.0.102/3,cpp,lzo failed
    distcc[25214] (dcc_build_somewhere) Warning: remote compilation of 'arch/x86/realmode/rm/video-vga.c' failed, retrying locally
    distcc[25252] Warning: failed to distribute arch/x86/realmode/rm/video-vesa.c to 192.168.0.102/3,cpp,lzo, running locally instead
    distcc[25214] Warning: failed to distribute arch/x86/realmode/rm/video-vga.c to 192.168.0.102/3,cpp,lzo, running locally instead
    CC mm/mempool.o
    distcc[25252] (dcc_please_send_email_after_investigation) Warning: remote compilation of 'arch/x86/realmode/rm/video-vesa.c' failed, retried locally and got a different result.
    distcc[25252] (dcc_note_discrepancy) Warning: now using plain distcc, possibly due to inconsistent file system changes during build
    CC kernel/panic.o
    distcc[25214] (dcc_please_send_email_after_investigation) Warning: remote compilation of 'arch/x86/realmode/rm/video-vga.c' failed, retried locally and got a different result.
    CC arch/x86/realmode/rm/video-bios.o
    CC arch/x86/xen/multicalls.o
    PASYMS arch/x86/realmode/rm/pasyms.h
    LDS arch/x86/realmode/rm/realmode.lds
    LD arch/x86/realmode/rm/realmode.elf
    RELOCS arch/x86/realmode/rm/realmode.relocs
    OBJCOPY arch/x86/realmode/rm/realmode.bin
    AS arch/x86/realmode/rmpiggy.o
    OBJCOPY arch/x86/boot/setup.bin
    BUILD arch/x86/boot/bzImage
    Setup is 17036 bytes (padded to 17408 bytes).
    System is 3429 kB
    CRC fa62427
    Kernel: arch/x86/boot/bzImage is ready (#10)
    __________Warning: 3 pump-mode compilation(s) failed on server, but succeeded locally.
    __________Distcc-pump was demoted to plain mode. See the Distcc Discrepancy Symptoms section in the include_server(1) man page.
    __________Shutting down distcc-pump include server
    pump make -j12 bzImage CC=distcc 373.40s user 18.70s system 455% cpu 1:26.10 total
    I'd love to figure out what's going wrong.
    Note that I can use makepkg to build using just distcc (no pump mode) just fine.
    /etc/makepkg.conf
    MAKEFLAGS="-j12"
    BUILDENV=(distcc fakeroot color !ccache check !sign)
    DISTCC_HOSTS="localhost/9 192.168.0.102/3"
    Last edited by graysky (2012-12-20 00:05:29)

    Hi @utopian,
    Welcome to the HP Forums!
    I understand that you cannot scan with your HP Officejet Pro 8500 on Windows 7 using a wireless network. I am happy to look into this issue for you!
    Please take a look through this scanning guide, A 'Connection Error' or 'No Computer Detected' Error Message Displays during Scanning for HP Officej....
    Hope this guide helps!  
    RnRMusicMan
    I work on behalf of HP

  • How would you use these Apple devices to create the best network?

    I have the following devices and want to create the best network for my 3-story home:
    Time Capsule 3TB, latest generation (needs to be located on the middle floor in a corner room by the Comcast modem)
    Time Capsule late 2009
    Airport Express 2010
    More data, the devices that connect to this network:
    New MacBook pro (in same room with new Time Capsule)
    New iMac (in same room with Time Capsule)
    2009 iMac (on second floor)
    iPhone 5 and 5s
    iPad minis
    iPad Airs
    Apple TV second generation
    I've done some testing and the signal is pretty good on the second floor (meaning that the download speeds on the second floor iMac are the same as those in the same room with the new Time Capsule).  But the upstairs bedrooms have speeds cut by 50%.
    There is no reasonable way to connect the new Time Capsule by ethernet to any other devices.
    Would it be helpful to extend the wireless network "wirelessly" with the old Time Capsule in an area upstairs with strong signal?  Would the old TC then "re-pump" the signal into the bedrooms which don't have line of sight with the new TC?
    How would I best do this?
    Thanks for any help you can offer.

    Time Capsule 3TB, latest generation (needs to be located on the middle floor in a corner room by the Comcast modem)
    Time Capsule late 2009
    Airport Express 2010
    So, the old Time Capsule and AirPort Express are near the new Time Capsule and connected to it with an Ethernet cable.....or are the old Time Capsule and Express not even in use at all?
    Would it be helpful to extend the wireless network "wirelessly" with the old Time Capsule in an area upstairs with strong signal?
    While it would be better if you could connect the old Time Capsule to the new Time Capsule using a wired Ethernet connection, the old Time Capsule might help things out a bit in the bedroom area if it is located where it can receive a strong wireless signal from the new Time Capsule.
    This is one of those situations where performance cannot be predicted and you simply have to try it out in your home to see how well it will work. It could not hurt to use the old TC to extend the signal wirelessly to the bedroom area, if you want to give that a try.
    You did not mention the AirPort Express, but it could possibly be located downstairs if you need more wireless signal coverage in a given area. The trick again, is to find a location where the Express can both receive a strong wireless signal and also provide additional wireless coverage.
    Setting up the old Time Capsule or Express will only take a few minutes if you want to give things a try. Post back if you need more details on "how" to do this.

  • How to find tables with columns that are not supported by Data Pump network_link

    Hi Experts,
    We are trying to import a database into a new DB using Data Pump with network_link.
    As the Oracle documentation states, tables with columns that are object types are not supported in a network export. An ORA-22804 error will be generated and the export will move on to the next table. To work around this restriction, you can manually create the dependent object types within the database from which the export is being run.
    My question: how do I find the tables with columns that are object types and therefore not supported in a network export?
    We have LOB columns and the Oracle Spatial SDO_GEOMETRY object type. Our database is about 300 GB; normally exp takes 30 hours.
    We are trying to use Data Pump with network_link to speed up the export process.
    How do we handle the Oracle Spatial SDO_GEOMETRY user-defined type issue during Data Pump?
    Our system is 32-bit Windows 2003 with a 10gR2 database.
    Thanks
    Jim
    Edited by: user589812 on Nov 3, 2009 12:59 PM

    Hi,
    I remember there being issues with sdo_geometry and DataPump. You may want to contact oracle support with this issue.
    Dean
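    To answer the original question about locating the affected tables: columns whose data type is a user-defined object type (including MDSYS.SDO_GEOMETRY) have a non-NULL DATA_TYPE_OWNER in the data dictionary, while built-in types such as VARCHAR2, NUMBER and the LOB types do not. A query along these lines should list the candidates (the schema name is a placeholder):
    SELECT owner, table_name, column_name, data_type_owner, data_type
    FROM   dba_tab_columns
    WHERE  owner = 'YOUR_SCHEMA'         -- placeholder: the schema being exported
    AND    data_type_owner IS NOT NULL   -- object/user-defined types only
    ORDER  BY table_name, column_name;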

  • Creating a home music network server

    Hello folks I am after some advice.
    We are due to move house, and this is a significant move up the ladder, with a conservatory, living room, dining room and three bedrooms.
    I currently have a small home network: an Intel iMac with a USB hard drive holding the music for iTunes, plus an iBook. Currently the iMac feeds my home stereo via optical out into a DAC.
    In the new house I intend to have the iMac in the dining room and use the iBook to stream music to the two hi-fis, one in the conservatory and one in the sitting room. This will be accomplished with two AirPort Expresses and my wireless internet router.
    Looking around eBay I find the price of G3 Power Macs has dropped madly to around 50 quid. This is almost too good to be true, as I have room under the stairs in the new place to stick one, so here are my questions.
    1. Are internal hard drives still available for a G3 Power Mac?
    2. All the ones I have seen have OS 9 on them. I would simply be using it as backup and music storage to be accessed from my iMac and iBook; will this be possible with OS 9 on it?
    3. I do not intend attaching a keyboard, mouse and screen to the Mac, at least not in the long term. Is there any way I can VNC into OS 9 from OS X, or is there another way?
    4. Will OS 9 be stable enough? I.e. I would want it running 24/7.
    5. Finally, would there be any way to tell iTunes the music is on a network share served by OS 9, and would there be any way to have the G3 show up on my Macs permanently, i.e. so I don't have to keep hitting 'Go' and praying it shows up? (At work, for instance, network places on the PC just show up on the desktop as long as they are available.)
    Any help or advice greatly appreciated.
    BTW, what I am looking at is keeping my iMac free for other things, for instance if I want to play a computer game on my iMac whilst my wife is listening to streamed music in the living room. And secondly, I want this to be as cheap as possible, so no Squeezeboxes!
    Cheers

    Alas, I see one error in my plan: ideally I don't want to be streaming music with VNC; I want to be able to pick music on my iBook to be pumped from the G3. I find that with normal network sharing you can't search tunes, which is really bad, so I am hoping the G3 will be the actual storage area for the iTunes music and will be able to stream from the G3 while being controlled from the iBook.

  • Getting Locations and various WIFI networks under better control...

    I have several networks that I use. I have a strong preference for some reason to use OpenDNS.
    -Home 1 (Airport Extreme, WPA2 Personal, Static IP although DHCP for everyone else - Bonjour printer attached to AE)
    -Home 2 (Airport Extreme, WPA2 Personal, Static IP although DHCP for everyone else)
    -School (802.1X, WPA Enterprise)
    -Work (Linksys Wireless G Access Point, Static IP, their own DNS servers (can I still use OpenDNS?), primarily a wired Windows workgroup, but all on the same network, so I can still use Bonjour to get a shared printer)
    -Random networks....just want to access them and use DHCP with OpenDNS.
    So is it possible to cut down on some of the locations? I usually have to select my location when I move, and recently I have also had to select the WiFi network from the AirPort menu; it has always been automatic, I think. I would love a seamless network switch based on which networks are found in range... is this possible without 3rd-party software, or do I need to go that direction?
    My iPhone does this beautifully and automatically... how can my MBP do this? (exceptions being printers and that iPhone will not connect to my school's primary network that I mentioned above, I use their secondary network which I have to lease for an hour at a time - I'm pumped to have my legit 2.0 firmware).
    Thanks for your help.

    I just erased and installed Leopard on my angry MBP that always gets ****** off (it's been through 3-4 migrations and restores). I spared no mercy for preferences or backed-up programs; everything is from scratch, just the documents, music, movies etc. in my home folder.
    So I got to play with a very fresh and clean list of settings for the first time in a while.
    My iPhone... manages manual IPs and server settings based on the WiFi network that I am on. When I am at home on one network, I have DHCP with a manual address. When I'm at work, I have a different gateway and static IP address.
    Manual network IPs do not change with Leopard, do they? The way that I have to manage my static IPs right now is by creating new locations and manually switching at each location. Isn't there some way for this to happen automatically?
    I wake up, check my email at home, on a static IP. Boom.
    I go to work, open my laptop, get on the internet. Boom.
    Forgetting locations, is this possible? My iPhone does it without complaint...

  • Data pump using network_link problem

    Hi,
    I defined a database link for the sys user as:
    create database link xe_remote
    connect to hr
    identified by hr
    using 'xe_remote'
    and it works fine:
    select * from dual@xe_remote
    D
    X
    Then, tried to import with data pump the user hr from xe_remote in the local system, but it seemed that the database link wasn't valid anymore:
    impdp system/pwd schemas=hr network_link=xe_remote remap_schema=hr:hr_remote table_exists_action=replace
    ORA-39001: invalid argument value
    ORA-39200: Link name "xe_remote" is invalid
    ORA-02019: connection description for remote database not found
    Thanks,
    Gianluca

    Hello
    oerr says:
    oerr ora 39149
    39149, 00000, "cannot link privileged user to non-privileged user"
    // *Cause:  A Data Pump job initiated by a user with
    // EXPORT_FULL_DATABASE/IMPORT_FULL_DATABASE roles specified a
    // network link that did not correspond to a user with
    // equivalent roles on the remote database.
    // *Action: Specify a network link that maps users to identically privileged
    // users in the remote database.
    Solution:
    Create a db link from hr_remote to hr
    or
    Create a db link from system to system (I think, it's easier, because you have all rights...)
    Greetings
    Sven
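    In this particular case the ORA-02019 is simply because the link was created as a private link owned by SYS, so the SYSTEM session running impdp cannot see it. A sketch of the second suggestion above (passwords are placeholders):
    -- create the link in the schema that actually runs the import
    CONNECT system/pwd
    CREATE DATABASE LINK xe_remote CONNECT TO system IDENTIFIED BY remote_pwd USING 'xe_remote';
    -- then the original command should resolve the link:
    -- impdp system/pwd schemas=hr network_link=xe_remote remap_schema=hr:hr_remote table_exists_action=replace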

  • Database Upgrade using Data Pump

    Hi,
    I am moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2 (11.2.0.3),
    therefore I am using the export/import method of upgrade (via Data Pump, not the old exp/imp).
    I have successfully exported my source database and have created the empty shell database ready to take the import. However, I have a couple of queries.
    Q1. Regarding all the SYSTEM objects from the source database: how will they import, given that the new target database already has a SYSTEM tablespace?
    I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However, should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import but use the REMAP_DATAFILE option? What is everyone's experience as to which is the better way to go? Again, if I pre-create the tablespaces, how do I tell the import to ignore the creation of the tablespaces?
    Q3. These 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server and then the import, I could use a network link for the import. I was just wondering whether there are any cons to this method compared with using an explicit export dump file?
    thanks,
    Jim

    Jim,
    Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
    I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best ?If all you have is the base database and nothing created, then you can do the full=y. In fact, this is probably what you want. The system tablespace will be there so when Data Pump tries to create it , it will just fail that create statement. Nothing else will fail. In most cases, your system tables will already be there, and this is ok too. If you do schema mode imports, you will miss out on some of the other stuff.
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces or leave this to the import but use the REMAP >DATAFILE option - what is everyone's experience as to which is the better way to go ? Again if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespacesIf the directory structure is different (which they usually are) then there is no easier way. You can run impdp but with sqlfile and you can say - include=tablespace. This will give you all of the create tablespace commands in a txt file and you can edit the text file to change what ever you want to change. You can tell datapump to skip the tablespace creation by using - exclude=tablespace
    Q3. these 2 databases are on the same network, so in theorey instead of a manual export, copy of the dump file to the new server and then the import, I could use a Network Link for Import. I >was just wondering where there any con's of this method over using the explicit export dump file ?The only con could be if you have a slow network. This will make it slower, but if you have to copy the dumpfile over the same network, then you will still see the same basic traffic. The pros are that you don't have to have extra disk space. Here is how I look at it.
    1. you need XX GB for the source database
    2. you need YY GB for the source dumpfile
    3. you need YY GB for the target dumpfile that you copy
    4. you need XX GB for the target database.
    By doing the network import you get rid of the 2*YY GB for the dump files.
    Dean
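    A minimal sketch of the sqlfile/exclude approach described above (DP_DIR, the dump file name and the script name are placeholders):
    impdp system/pwd directory=DP_DIR dumpfile=full_export.dmp sqlfile=create_tbs.sql include=TABLESPACE
    impdp system/pwd directory=DP_DIR dumpfile=full_export.dmp full=y exclude=TABLESPACE
    The first command imports nothing; it only writes the CREATE TABLESPACE statements into create_tbs.sql so the datafile paths can be edited and the script run on the target first. The second command then performs the import while skipping tablespace creation.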

  • Import Data over network link in oracle 11g

    We want to take an export of the OND schema in the production database and import it into the OND schema in the UAT database over a network link using Data Pump, in Oracle 11g. Kindly share the steps.

    Scenario:
    Directly importing the TEST01 schema in the production database (oraodrmu) into the test database oraodrmt, over a network, using a database link and Data Pump in Oracle 11g.
    Note: When you perform an import over a database link, the import source is a database, not a dump file set, and the data is imported to the connected database instance.
    Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
    =================================================================
    STEP-1 (IN PRODUCTION DATABASE - oraodrmu)
    =================================================================
    [root@szoddb01]>su - oraodrmu
    Enter user-name: /as sysdba
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> grant resource to test01;
    Grant succeeded.
    SQL> grant imp_full_database to test01;
    Grant succeeded.
    SQL> select owner,object_type,status,count(*) from dba_objects where owner='TEST01' group by owner,object_type,status;
    OWNER OBJECT_TYPE STATUS COUNT(*)
    TEST01 PROCEDURE     VALID 2
    TEST01 TABLE VALID 419
    TEST01 SEQUENCE VALID 3
    TEST01 FUNCTION VALID 8
    TEST01 TRIGGER VALID 3
    TEST01 INDEX VALID 545
    TEST01 LOB VALID 18
    7 rows selected.
    SQL>
    SQL> set pages 999
    SQL> col "size MB" format 999,999,999
    SQL> col "Objects" format 999,999,999
    SQL> select obj.owner "Owner"
    2 , obj_cnt "Objects"
    3 , decode(seg_size, NULL, 0, seg_size) "size MB"
    4 from (select owner, count(*) obj_cnt from dba_objects group by owner) obj
    5 , (select owner, ceil(sum(bytes)/1024/1024) seg_size
    6 from dba_segments group by owner) seg
    7 where obj.owner = seg.owner(+)
    8 order by 3 desc ,2 desc, 1
    9 /
    Owner Objects size MB
    OND                    8,097     284,011
    SYS                    9,601     1,912
    TEST01                    998     1,164
    3 rows selected.
    SQL> exit
    =================================================================
    STEP-2 (IN TEST DATABASE - oraodrmt)
    =================================================================
    [root@szoddb01]>su - oraodrmt
    [oraodrmt@szoddb01]>sqlplus
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 3 18:40:16 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Enter user-name: /as sysdba
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select name,open_mode from v$database;
    NAME OPEN_MODE
    ODRMT READ WRITE
    SQL> create tablespace test_test datafile '/trn_u04/oradata/odrmt/test01.dbf' size 2048m;
    Tablespace created.
    SQL> create user test01 identified by test123 default tablespace test_test;
    User created.
    SQL> grant resource, create session to test01;
    Grant succeeded.
    SQL> grant EXP_FULL_DATABASE to test01;
    Grant succeeded.
    SQL> grant imp_FULL_DATABASE to test01;
    Grant succeeded.
    Note: ODRMU is the DNS host name. We can test the connection with: [oraodrmt@szoddb01]>sqlplus test01/test01@odrmu
    SQL> create directory test_network_dump as '/dbdump/test_exp';
    Directory created.
    SQL> grant read,write on directory test_network_dump to test01;
    Grant succeeded.
    SQL> conn test01/test123
    Connected.
    SQL> create DATABASE LINK remote_test CONNECT TO test01 identified by test01 USING 'ODRMU';
    Database link created.
    For testing the database link we can try the SQL below:
    SQL> select count(*) from OA_APVARIABLENAME@remote_test;
    COUNT(*)
    59
    SQL> exit
    [oraodrmt@szoddb01]>impdp test01/test123 network_link=remote_test directory=test_network_dump remap_schema=test01:test01 logfile=impdp__networklink_grms.log;
    [oraodrmt@szoddb01]>
    Import: Release 11.2.0.2.0 - Production on Mon Dec 3 19:42:47 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "TEST01"."SYS_IMPORT_SCHEMA_01": test01/******** network_link=remote_test directory=test_network_dump remap_schema=test01:test01 logfile=impdp_grms_networklink.log
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 318.5 MB
    Processing object type SCHEMA_EXPORT/USER
    ORA-31684: Object type USER:"TEST01" already exists
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    . . imported "TEST01"."SY_TASK_HISTORY" 779914 rows
    . . imported "TEST01"."JCR_JNL_JOURNAL" 603 rows
    . . imported "TEST01"."GX_GROUP_SHELL" 1229 rows
    Job "TEST01"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 19:45:19
    [oraodrmt@szoddb01]>sqlplus
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 3 19:46:04 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Enter user-name: /as sysdba
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select owner,object_type,status,count(*) from dba_objects where owner='TEST01' group by owner,object_type,status;
    OWNER OBJECT_TYPE STATUS COUNT(*)
    TEST01 PROCEDURE          VALID 2
    TEST01 TABLE               VALID 419
    TEST01 SEQUENCE          VALID 3
    TEST01 FUNCTION          VALID 8
    TEST01 TRIGGER          VALID 3
    TEST01 INDEX               VALID 545
    TEST01 LOB               VALID 18
    TEST01 DATABASE LINK          VALID 1
    8 rows selected.
    SQL>
    SQL> set pages 999
    SQL> col "size MB" format 999,999,999
    SQL> col "Objects" format 999,999,999
    SQL> select obj.owner "Owner"
    2 , obj_cnt "Objects"
    3 , decode(seg_size, NULL, 0, seg_size) "size MB"
    4 from (select owner, count(*) obj_cnt from dba_objects group by owner) obj
    5 , (select owner, ceil(sum(bytes)/1024/1024) seg_size
    6 from dba_segments group by owner) seg
    7 where obj.owner = seg.owner(+)
    8 order by 3 desc ,2 desc, 1
    9 /
    Owner Objects size MB
    OND                8,065          247,529
    SYS               9,554          6,507
    TEST01               999          1,164
    13 rows selected.
    =================================================================
    STEP-3 FOR REMOVING THE DATABASE LINK
    =================================================================
    [oraodrmt@szoddb01]>sqlplus
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 3 19:16:01 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Enter user-name: /as sysdba
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> drop database link remote_test;
    Database link dropped.

  • Failover Cluster Manager - Partitioned Networks Server 2012

    Hello,
    I have a DR cluster that I am trying to validate. It has 3 nodes. Each has a teamed vEthernet adapter for Cluster Communications and Live Migration. I can start the cluster service on 2 of the 3 nodes without the networks entering a partitioned state. However,
    I can only ping the IPs for those adapters from their own server. Also, it doesn't matter which 2 nodes are brought up; any order will produce the same results. Validation gives the following error for connections between all nodes:
    Network interfaces DRHost3.m1ad.xxxxxx.biz - vEthernet
    (vNIC-LiveMig) and DRHost2.m1ad.xxxxxx.biz - vEthernet (vNIC-LiveMig) are on
    the same cluster network, yet address 192.168.xxx.xx is not reachable from
    192.168.xxx.xx using UDP on port 3343.
    Update: I have created a specific inbound rule on the server firewalls for port 3343. The networks no longer show as partitioned. However, I still receive the same errors about communication on port 3343 to and from all nodes on the LiveMig and ClustPriv
    networks. Any help would be appreciated.
    Brian Gilmore Lead IT Technician Don-Nan Pump & Supply

    Windows IP Configuration
       Host Name . . . . . . . . . . . . : DRHost1
       Primary Dns Suffix  . . . . . . . : m1ad.don-nan.biz
       Node Type . . . . . . . . . . . . : Hybrid
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No
       DNS Suffix Search List. . . . . . : m1ad.don-nan.biz
    Ethernet adapter vEthernet (VM Public Network):
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #2
       Physical Address. . . . . . . . . : 14-FE-B5-CA-35-6C
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::609a:8da3:7bce:c32f%31(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.9.113(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.9.1
       DHCPv6 IAID . . . . . . . . . . . : 1091894965
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-58-CA-14-FE-B5-CA-35-6E
       DNS Servers . . . . . . . . . . . : 192.168.9.21
                                           192.168.9.23
       NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter vEthernet (vNIC-ClusterPriv):
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #3
       Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-14
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::2481:996:cf44:dc3d%32(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.108.31(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 1258298840
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-58-CA-14-FE-B5-CA-35-6E
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Ethernet adapter vEthernet (vNIC-LiveMig):
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #4
       Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-15
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::f884:a35d:aa43:720e%33(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.109.31(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 1358962136
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-58-CA-14-FE-B5-CA-35-6E
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Ethernet adapter BC-PCI3 - iSCSI1:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (N
    DIS VBD Client) #49
       Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-3C
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv4 Address. . . . . . . . . . . : 192.168.107.22(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Ethernet adapter BC-PCI4 - iSCSI2:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (N
    DIS VBD Client) #50
       Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-3E
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv4 Address. . . . . . . . . . . : 192.168.107.23(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Windows IP Configuration
       Host Name . . . . . . . . . . . . : DRHost3
       Primary Dns Suffix  . . . . . . . : m1ad.don-nan.biz
       Node Type . . . . . . . . . . . . : Hybrid
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No
       DNS Suffix Search List. . . . . . : m1ad.don-nan.biz
    Ethernet adapter vEthernet (VM Public Network):
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #2
       Physical Address. . . . . . . . . : D0-67-E5-FB-A2-3F
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::9928:4d4f:4862:2ecd%31(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.9.115(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       IPv4 Address. . . . . . . . . . . : 192.168.9.119(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . : 192.168.9.1
       DHCPv6 IAID . . . . . . . . . . . : 1104177125
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-3C-E0-D0-67-E5-FB-A2-43
       DNS Servers . . . . . . . . . . . : 192.168.9.21
                                           192.168.9.23
       NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter vEthernet (vNIC-ClusterPriv):
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #3
       Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-18
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::3d99:312c:8f31:6411%32(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.108.33(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 1258298840
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-3C-E0-D0-67-E5-FB-A2-43
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Ethernet adapter vEthernet (vNIC-LiveMig):
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #4
       Physical Address. . . . . . . . . : 00-1D-D8-B7-1C-19
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::d859:b18a:71d6:8cef%33(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.109.33(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 1358962136
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-F2-3C-E0-D0-67-E5-FB-A2-43
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Ethernet adapter BC-PCI3-iSCSI1:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (N
    DIS VBD Client) #49
       Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-60
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv4 Address. . . . . . . . . . . : 192.168.107.26(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Ethernet adapter BC-PCI4-iSCSI2:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Broadcom BCM57711 NetXtreme II 10 GigE (N
    DIS VBD Client) #50
       Physical Address. . . . . . . . . : 00-0A-F7-2E-B5-62
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       IPv4 Address. . . . . . . . . . . : 192.168.107.27(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       NetBIOS over Tcpip. . . . . . . . : Disabled
    Brian Gilmore Lead IT Technician Don-Nan Pump & Supply

  • When does capture/pump/replicat process do a checkpoint?

    Dear Guru,
    May someone help me understand when the capture/pump/replicat processes do a checkpoint? There are 2 things that have me confused.
    - CHECKPOINTSECS parameter: there is no setting in the parameter file, which means it uses the default (10 s).
    - One document says that checkpoints are based on transaction boundaries, which means a checkpoint will happen at the end of each transaction.
    In my source DB there was a long-running transaction lasting 1 hour, which was committed and written to the local trail files. Then the pump process started to read the trail files to pump the data to the target. My concern is: when does the capture/pump/replicat process do a checkpoint - every 10 s, or after 1 hour?
    One thing is that I often see the value of "Time since chkpt" in the output of the "info all" command being more than 10 s. If the processes do a checkpoint every 10 s, how could the value of "Time since chkpt" be more than 10 s? I think that maybe the "chkpt" in "Time since chkpt" refers to a database transaction checkpoint, not a GoldenGate one. However, I'm not sure. It has me really confused.
    Here is the output of "Info all" command:
    Program  Status   Group  Lag       Time Since Chkpt
    EXTRACT  RUNNING  PUMP6  00:17:07  00:04:29
    My system profile:
    Source:
    OS: OEL5 64bit
    DB: Oracle 11gR2 (11.2.0.2)
    GG: 11gR1 (11.1)
    Target:
    OS: Solaris 10
    DB: Oracle 11gR2 (11.2.0.3)
    GG: 11gR1 (11.1)
    Thanks for your help in advance.
    Hieu
    Edited by: 972362 on Nov 21, 2012 2:36 AM

    CHECKPOINTSECS defines how often an Extract makes its regular checkpoints. An Extract will make a checkpoint every 10 seconds if this value is set to 10, irrespective of a transaction that is running for 1 hour. Extract also writes commit checkpoints when a transaction is committed.
    In your example, where the pump is showing a time since checkpoint greater than 10 seconds, there could be a problem at the network level or an issue with the remote trail.
    That is my observation: I had an Extract that was shown as RUNNING but whose time since last checkpoint was in hours, and when I checked the report file, it had an error.
