Validation on commit

Guys,
I have an af:table. Users create rows, input values, etc., but validation should be fired only on 'commit'. Also, invalid rows should be marked by some means.
How can this be achieved?
Thanks in advance.

Dev wrote:
How can I highlight the row (or an attribute on the specific row) which is invalid?

Hi Dev,
You are taking the topic in a different direction. I would say don't hijack the thread; open a separate one for your question. If the above answers resolved your topic, then close the thread.
Thanks,
Zeeshan

Similar Messages

  • How to defer the primary key validation to commit time

    Hi
    Is it possible to defer the primary key validation to commit time? I don't know why the framework checks the unique key constraint immediately after inserting the row and before committing it. This causes the "Too many objects match the primary key oracle.jbo.Key[null]" error if the user presses the create-new-record button multiple times before filling in and saving the previous records.
    Thanks,
    Ferez

    Dear M.Jabr,
    Many thanks for your reply. I have access to the database, but I would prefer an ADF workaround to this problem rather than a DB workaround. I am not sure, but I think there should be a way to defer or disable the primary key constraint in ADF.
    Anyway, I tried to make the primary key constraint DEFERRABLE in the DB using PL/SQL Developer, but an error occurred ('the name is used by another object'), and I don't know why.
    Thanks,
    Ferez
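    For reference, on the DB side an existing constraint cannot be switched to DEFERRABLE in place; it has to be dropped and re-created as deferrable. A hedged sketch with hypothetical table, column, and constraint names (the 'name is used by another object' error may come from a leftover index with the constraint's name, which is worth checking first):

    ```sql
    -- Hypothetical names, purely for illustration.
    ALTER TABLE my_table DROP CONSTRAINT my_table_pk;

    ALTER TABLE my_table
      ADD CONSTRAINT my_table_pk PRIMARY KEY (id)
      DEFERRABLE INITIALLY DEFERRED;
    ```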

  • Firing validation on commit

    Guys,
    I have the following scenario.
    I have a table. Users can create and edit rows inline, but validation needs to be done only when the user clicks 'save' (commit). Also, invalid rows/attributes should be highlighted in red.
    How can this be achieved?

    Try setting SkipValidation="true" on the page definition:
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel"
                    version="11.1.1.54.7" id="CarDecorsHomePageDef"
                    Package="com.mpapana.cardecors.ui.page"
                    SkipValidation="true">
    </pageDefinition>
    http://www.adftips.com/2009/09/how-to-skip-entity-level-validations.html

  • Help: Data validation before commit or move to next record

    The form that I am working on allows data insert. It needs a feature to inform the end user of incomplete field(s) before commit_form or moving to the next record for updating. Please let me know how you would do it.
    Thank you,
    Jimmy

    To prevent cursor movement out of a field in a When-Validate-Item trigger, all you do is:
    RAISE Form_trigger_failure;
    However, if the field is null and the user does not enter anything while tabbing through, or just clicks in and then clicks somewhere else, the When-Validate-Item trigger does not run. You have to specifically check whether the field is null in the When-Validate-Record trigger.

  • SQL LOADER  message : Point de validation COMMIT atteint - nombre d'enreg

    Hello
    I am loading a CSV file into an Oracle table via SQL*Loader.
    Each time I invoke it I get the following message:
    SQL*Loader message: "Point de validation <COMMIT> atteint - nombre d'enregis. logiques 52." (I am on the French version)
    And the table is left empty.
    The translation of the above message is:
    SQL*Loader: Validation point <COMMIT> reached. Number of logical records: 52.
    What does it mean?
    Thanks in advance.

    Yes
    my table is empty and
    and this is my control file :
    load data
    infile 'i:\csvs\mvh051.csv'
    into table mvhist051
    fields terminated by ';' optionally enclosed by '"'
    (ETABLI,NUPIECE,NUMLIGNE,DATEOP,JOURNAL,COMPTE,CODEN,LIBELLE,MONTANT,SENS,DATEVALEUR,OPERATION,NPIECE,CORIG,SOLDE,MVAPUR)
    here is a sample of the file:
    5100;2052;1;01/01/2005;ARB;1120060;0;report de solde 2004;6122867,27;D;01/01/2005;98;;1
    5100;2053;1;01/01/2005;ARB;1120070;0;report de solde 2004;223639,17;D;01/01/2005;98;;1
    5100;2054;1;01/01/2005;ARB;1261000;0;report de solde 2004;4680204207,00;D;01/01/2005;98;;1
    5100;2055;1;01/01/2005;ARB;1271100;0;report de solde 2004;81895715591,44;D;01/01/2005;98;;1
    ETABLI;NUPIECE;NUMLIGNE;DATEOP;JOURNAL;COMPTE;CODEN;LIBELLE;MONTANT;SENS;DATEVALEUR;OPERATION;NPIECE;CORIG;SOLDE;MVAPUR
    Please note that the last line contains the column heading names. I don't understand why
    it is copied.
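    The "Commit point reached" line is SQL*Loader's normal informational message, not an error; when the table stays empty, the rejected rows and the reasons are listed in the .log and .bad files. If the heading line is being picked up along with the data, one option is a WHEN clause that filters it out. A hedged sketch, reusing the control file above:

    ```
    load data
    infile 'i:\csvs\mvh051.csv'
    into table mvhist051
    when ETABLI <> 'ETABLI'
    fields terminated by ';' optionally enclosed by '"'
    (ETABLI,NUPIECE,NUMLIGNE,DATEOP,JOURNAL,COMPTE,CODEN,LIBELLE,MONTANT,SENS,DATEVALEUR,OPERATION,NPIECE,CORIG,SOLDE,MVAPUR)
    ```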

  • ADF BC/Faces - Order of validation / backing bean action problem

    Hello,
    I want the user to enter the same "operation date" into all ADF Faces table rows created in one batch (one transaction).
    So I removed the operation date field from the table and added an "unbound" date field above the table. The date field's value is then copied into all new rows in the background. This process is invoked from a backing bean, from the Commit button's actionListener method.
    The problem is that validation of the operation date in the entity is executed before the new date value is copied into the date attribute (in the JSF lifecycle, model validation runs before backing-bean actions are invoked).
    It means the user can end up in a deadlock when he enters an invalid date (for example, a date higher than a valid value).
    Then after commit:
    1. The first validation is OK (the wrong date value hasn't been copied into the model yet)
    2. The backing-bean copy action is executed - the model now contains the wrong date value
    3. Validation before commit isn't successful - an error message is displayed
    4. The user corrects the date value and presses commit again, but:
    5. The first validation is not successful - the model still contains the recent wrong date value - the error message is displayed again
    6. There is no way out of this situation
    I'm going to override the lifecycle to be able to invoke the copy method before the validation phase. Is this solution acceptable? Do you have any other suggestion?
    Thank you.
    Rado

    hi Rado
    Would it make sense to design your ADF BC View Objects in some kind of master-detail shape that fits your data?
    View Object : OperationMaster (OperationDateAttr, ...)
    View Object : OperationDetail (OperationAttr1, OperationAttr2, ...)
    View Link : OperationDetailForMasterVL (based on some attribute that keeps the detail rows together)
    You would need some Application Module method that does the "row batch setup", but it looks like you already have something like this.
    A change to OperationMaster.OperationDateAttr could update the date attribute of all its detail rows.
    I think that building a UI on this would be less "view layer dependent".
    Just a suggestion.
    regards
    Jan Vervecken
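    The master-detail idea above can be sketched generically. A minimal Python illustration, purely to show the propagation shape; the class and attribute names are hypothetical, not ADF BC API:

    ```python
    # Sketch of the suggested master-detail shape: changing the master's
    # operation date pushes the new value to every detail row.
    class OperationDetail:
        def __init__(self):
            self.operation_date = None

    class OperationMaster:
        def __init__(self, operation_date):
            self.operation_date = operation_date
            self.details = []

        def add_detail(self):
            row = OperationDetail()
            row.operation_date = self.operation_date  # new rows inherit the master's date
            self.details.append(row)
            return row

        def set_operation_date(self, new_date):
            self.operation_date = new_date
            for row in self.details:                  # propagate to all detail rows
                row.operation_date = new_date
    ```

    With this shape, the "copy the date into every row" step no longer lives in a view-layer actionListener, which is the point of the suggestion.
    
    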

  • WHEN_VALIDATE_ITEM triggering not preventing the data commit in oracle form

    Hi,
    In my custom tabular form, there are some fields (columns) which need to be validated against the HH:MI time format. The user should ideally enter HH:MI, but I am adding validation code to ensure the format is either HH:MI (08:30) or H:MI (8:30), where valid values of HH are 00-23 and MI are 00-59.
    As usual, I wrote my code in the When-Validate-Item trigger of the specific item. Code for reference:
    ============================================================
    DECLARE
      v_start      varchar2(2);
      v_mid        varchar2(1);
      v_end        varchar2(2);
      v_attribute3 varchar2(5);
    BEGIN
      v_attribute3 := :XXDWTC_EMP_SCH_DET.ATTRIBUTE3;
      BEGIN
        select decode(instr(v_attribute3, ':'),
                      2, substr(v_attribute3, 1, 1),
                      3, decode(substr(v_attribute3, 1, 1),
                                '0', substr(v_attribute3, 2, 1),
                                substr(v_attribute3, 1, 2)))
          into v_start
          from dual;
      EXCEPTION
        WHEN OTHERS THEN v_start := NULL;
      END;
      BEGIN
        select substr(v_attribute3, -3, 1)
          into v_mid
          from dual;
      EXCEPTION
        WHEN OTHERS THEN v_mid := NULL;
      END;
      BEGIN
        select decode(substr(v_attribute3, 4, 1),
                      '0', substr(v_attribute3, 5, 1),
                      substr(v_attribute3, 4, 2))
          into v_end
          from dual;
      EXCEPTION
        WHEN OTHERS THEN v_end := NULL;
      END;
      IF v_attribute3 IS NOT NULL AND v_start IS NOT NULL
         AND v_mid IS NOT NULL AND v_end IS NOT NULL
      THEN
        IF to_number(v_start) < 0 OR to_number(v_start) > 23
           OR to_number(v_end) < 0 OR to_number(v_end) > 59
           OR v_mid <> ':'
        THEN
          message('Invalid Time Format for In1');
          message(' ');
          RAISE FORM_TRIGGER_FAILURE;
        END IF;
      END IF;
    EXCEPTION
      WHEN OTHERS THEN NULL;
    END;
    ===========================================================================================
    The trigger is getting fired for invalid data (the error pops up), but the issue is that I am also able to save the transaction. Is When-Validate only for validation before commit, with no direct impact on the commit process?
    Please assist me on the same.
    Regards,
    Ad
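    For comparison, the format rule described above (HH:MI or H:MI, hours 00-23, minutes 00-59) can be sketched outside Forms. A minimal Python illustration, not the poster's code; the function name is hypothetical:

    ```python
    import re

    # Accepts H:MI or HH:MI where hours are 00-23 and minutes are 00-59.
    TIME_RE = re.compile(r'^(\d{1,2}):(\d{2})$')

    def is_valid_hhmi(value: str) -> bool:
        m = TIME_RE.match(value)
        if not m:
            return False                       # wrong shape: no colon, bad lengths, etc.
        hours, minutes = int(m.group(1)), int(m.group(2))
        return 0 <= hours <= 23 and 0 <= minutes <= 59
    ```
    
    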

    Hi;
    For your issue I suggest you close your thread here (changing the thread status to answered) and move it to Forum Home » Application Development in PL/SQL » Forms, where you can get a quicker response.
    Regard
    Helios

  • Problems with kismet

    Hi, I had problems starting kismet...
    sudo kismet
    Launching kismet_server: /usr/bin/kismet_server
    Will drop privs to koala (1000) gid 1000
    No specific sources given to be enabled, all will be enabled.
    Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
    Enabling channel hopping.
    Enabling channel splitting.
    NOTICE: Disabling channel hopping, no enabled sources are able to change channel.
    Source 0 (madwifi): Enabling monitor mode for madwifi_b source interface wlan0 channel 6...
    ERROR: Unable to create VAP: Operation not supported
    ERROR: Unable to create monitor-mode VAP
    WARNING: wlan0 appears to not accept the Madwifi-NG controls. Will attempt to configure it as a standard Madwifi-old interface. If you are using madwifi-ng, be sure to set the source interface to the wifiX control interface, NOT athX
    FATAL: Failed to retrieve list of private ioctls 95:Operation not supported
    Done.
    Here is my kismet.conf
    # Kismet config file
    # Most of the "static" configs have been moved to here -- the command line
    # config was getting way too crowded and cryptic. We want functionality,
    # not continually reading --help!
    # Version of Kismet config
    version=2007.09.R1
    # Name of server (Purely for organizational purposes)
    servername=Kismet
    # User to setid to (should be your normal user)
    suiduser=koala
    # Do we try to put networkmanager to sleep? If you use NM, this is probably
    # what you want to do, so that it will leave the interfaces alone while
    # Kismet is using them. This requires DBus support!
    networkmanagersleep=true
    # Sources are defined as:
    # source=sourcetype,interface,name[,initialchannel]
    # Source types and required drivers are listed in the README under the
    # CAPTURE SOURCES section.
    # The initial channel is optional, if hopping is not enabled it can be used
    # to set the channel the interface listens on.
    # YOU MUST CHANGE THIS TO BE THE SOURCE YOU WANT TO USE
    source=madwifi_b,wlan0,madwifi
    # Comma-separated list of sources to enable. This is only needed if you defined
    # multiple sources and only want to enable some of them. By default, all defined
    # sources are enabled.
    # For example:
    # enablesources=prismsource,ciscosource
    # Automatically destroy VAPs on multi-vap interfaces (like madwifi-ng).
    # Madwifi-ng doesn't work in rfmon when non-rfmon VAPs are present, however
    # this is a fairly invasive change to the system so it CAN be disabled. Expect
    # things not to work in most cases if you do disable it, however.
    vapdestroy=true
    # Do we channelhop?
    channelhop=true
    # How many channels per second do we hop? (1-10)
    channelvelocity=5
    # By setting the dwell time for channel hopping we override the channelvelocity
    # setting above and dwell on each channel for the given number of seconds.
    #channeldwell=10
    # Do we split channels between cards on the same spectrum? This means if
    # multiple 802.11b capture sources are defined, they will be offset to cover
    # the most possible spectrum at a given time. This also controls splitting
    # fine-tuned sourcechannels lines which cover multiple interfaces (see below)
    channelsplit=true
    # Basic channel hopping control:
    # These define the channels the cards hop through for various frequency ranges
    # supported by Kismet. More finegrain control is available via the
    # "sourcechannels" configuration option.
    # Don't change the IEEE80211<x> identifiers or channel hopping won't work.
    # Users outside the US might want to use this list:
    # defaultchannels=IEEE80211b:1,7,13,2,8,3,14,9,4,10,5,11,6,12
    defaultchannels=IEEE80211b:1,6,11,2,7,3,8,4,9,5,10
    # 802.11g uses the same channels as 802.11b...
    defaultchannels=IEEE80211g:1,6,11,2,7,3,8,4,9,5,10
    # 802.11a channels are non-overlapping so sequential is fine. You may want to
    # adjust the list depending on the channels your card actually supports.
    # defaultchannels=IEEE80211a:36,40,44,48,52,56,60,64,100,104,108,112,116,120,124,128,132,136,140,149,153,157,161,184,188,192,196,200,204,208,212,216
    defaultchannels=IEEE80211a:36,40,44,48,52,56,60,64
    # Combo cards like Atheros use both 'a' and 'b/g' channels. Of course, you
    # can also explicitly override a given source. You can use the script
    # extras/listchan.pl to extract all the channels your card supports.
    defaultchannels=IEEE80211ab:1,6,11,2,7,3,8,4,9,5,10,36,40,44,48,52,56,60,64
    # Fine-tuning channel hopping control:
    # The sourcechannels option can be used to set the channel hopping for
    # specific interfaces, and to control what interfaces share a list of
    # channels for split hopping. This can also be used to easily lock
    # one card on a single channel while hopping with other cards.
    # Any card without a sourcechannel definition will use the standard hopping
    # list.
    # sourcechannels=sourcename[,sourcename]:ch1,ch2,ch3,...chN
    # ie, for us channels on the source 'wlanngsource' (same as normal channel
    # hopping behavior):
    # sourcechannels=wlanngsource:1,6,11,2,7,3,8,4,9,5,10
    # Given two capture sources, "wlannga" and "wlanngb", we want wlannga to stay
    # on channel 6 and wlanngb to hop normally. By not setting a sourcechannels
    # line for wlanngb, it will use the standard hopping.
    # sourcechannels=wlannga:6
    # To assign the same custom hop channel to multiple sources, or to split the
    # same custom hop channel over two sources (if splitchannels is true), list
    # them all on the same sourcechannels line:
    # sourcechannels=wlannga,wlanngb,wlanngc:1,6,11
    # Port to serve GUI data
    tcpport=2501
    # People allowed to connect, comma seperated IP addresses or network/mask
    # blocks. Netmasks can be expressed as dotted quad (/255.255.255.0) or as
    # numbers (/24)
    allowedhosts=127.0.0.1
    # Address to bind to. Should be an address already configured already on
    # this host, reverts to INADDR_ANY if specified incorrectly.
    bindaddress=127.0.0.1
    # Maximum number of concurrent GUI's
    maxclients=5
    # Do we have a GPS?
    gps=true
    # Host:port that GPSD is running on. This can be localhost OR remote!
    gpshost=localhost:2947
    # Do we lock the mode? This overrides coordinates of lock "0", which will
    # generate some bad information until you get a GPS lock, but it will
    # fix problems with GPS units with broken NMEA that report lock 0
    gpsmodelock=false
    # Packet filtering options:
    # filter_tracker - Packets filtered from the tracker are not processed or
    # recorded in any way.
    # filter_dump - Packets filtered at the dump level are tracked, displayed,
    # and written to the csv/xml/network/etc files, but not
    # recorded in the packet dump
    # filter_export - Controls what packets influence the exported CSV, network,
    # xml, gps, etc files.
    # All filtering options take arguments containing the type of address and
    # addresses to be filtered. Valid address types are 'ANY', 'BSSID',
    # 'SOURCE', and 'DEST'. Filtering can be inverted by the use of '!' before
    # the address. For example,
    # filter_tracker=ANY(!00:00:DE:AD:BE:EF)
    # has the same effect as the previous mac_filter config file option.
    # filter_tracker=...
    # filter_dump=...
    # filter_export=...
    # Alerts to be reported and the throttling rates.
    # alert=name,throttle/unit,burst/unit
    # The throttle/unit describes the number of alerts of this type that are
    # sent per time unit. Valid time units are second, minute, hour, and day.
    # Burst rates control the number of packets sent at a time
    # For example:
    # alert=FOO,10/min,5/sec
    # Would allow 5 alerts per second, and 10 alerts total per minute.
    # A throttle rate of 0 disables throttling of the alert.
    # See the README for a list of alert types.
    alert=NETSTUMBLER,10/min,1/sec
    alert=WELLENREITER,10/min,1/sec
    alert=LUCENTTEST,10/min,1/sec
    alert=DEAUTHFLOOD,10/min,2/sec
    alert=BCASTDISCON,10/min,2/sec
    alert=CHANCHANGE,5/min,1/sec
    alert=AIRJACKSSID,5/min,1/sec
    alert=PROBENOJOIN,10/min,1/sec
    alert=DISASSOCTRAFFIC,10/min,1/sec
    alert=NULLPROBERESP,10/min,1/sec
    alert=BSSTIMESTAMP,10/min,1/sec
    alert=MSFBCOMSSID,10/min,1/sec
    alert=LONGSSID,10/min,1/sec
    alert=MSFDLINKRATE,10/min,1/sec
    alert=MSFNETGEARBEACON,10/min,1/sec
    alert=DISCONCODEINVALID,10/min,1/sec
    alert=DEAUTHCODEINVALID,10/min,1/sec
    # Known WEP keys to decrypt, bssid,hexkey. This is only for networks where
    # the keys are already known, and it may impact throughput on slower hardware.
    # Multiple wepkey lines may be used for multiple BSSIDs.
    # wepkey=00:DE:AD:C0:DE:00,FEEDFACEDEADBEEF01020304050607080900
    # Is transmission of the keys to the client allowed? This may be a security
    # risk for some. If you disable this, you will not be able to query keys from
    # a client.
    allowkeytransmit=true
    # How often (in seconds) do we write all our data files (0 to disable)
    writeinterval=300
    # How old (and inactive) does a network need to be before we expire it?
    # This is really only good for limited ram environments where keeping a
    # total log of all networks is problematic. This is in seconds, and should
    # be set to a large value like 12 or 24 hours. This is intended for use
    # on stationary systems like an IDS
    # logexpiry=86400
    # Do we limit the number of networks we log? This is for low-ram situations
    # when tracking everything could lead to the system falling down. This
    # should be combined with a sane logexpiry value to flush out very old
    # inactive networks. This is mainly for stationary systems like an IDS.
    # limitnets=10000
    # Do we track IVs? this can help identify some attacks, but takes a LOT
    # of memory to do so on a busy network. If you have the RAM, by all
    # means turn it on.
    trackivs=false
    # Do we use sound?
    # Not to be confused with GUI sound parameter, this controls wether or not the
    # server itself will play sound. Primarily for headless or automated systems.
    sound=false
    # Path to sound player
    soundplay=/usr/bin/play
    # Optional parameters to pass to the player
    # soundopts=--volume=.3
    # New network found
    sound_new=/usr/share/kismet/wav/new_network.wav
    # Wepped new network
    # sound_new_wep=/usr/com/kismet/wav/new_wep_network.wav
    # Network traffic sound
    sound_traffic=/usr/share/kismet/wav/traffic.wav
    # Network junk traffic found
    sound_junktraffic=/usr/share/kismet/wav/junk_traffic.wav
    # GPS lock aquired sound
    # sound_gpslock=/usr/share/kismet/wav/foo.wav
    # GPS lock lost sound
    # sound_gpslost=/usr/share/kismet/wav/bar.wav
    # Alert sound
    sound_alert=/usr/share/kismet/wav/alert.wav
    # Does the server have speech? (Again, not to be confused with the GUI's speech)
    speech=false
    # Server's path to Festival
    festival=/usr/bin/festival
    # Are we using festival lite? If so, set the above "festival" path to also
    # point to the "flite" binary
    flite=false
    # Are we using Darwin speech?
    darwinsay=false
    # What voice do we use? (Currently only valid on Darwin)
    speech_voice=default
    # How do we speak? Valid options:
    # speech Normal speech
    # nato NATO spellings (alpha, bravo, charlie)
    # spell Spell the letters out (aye, bee, sea)
    speech_type=nato
    # speech_encrypted and speech_unencrypted - Speech templates
    # Similar to the logtemplate option, this lets you customize the speech output.
    # speech_encrypted is used for an encrypted network spoken string
    # speech_unencrypted is used for an unencrypted network spoken string
    # %b is replaced by the BSSID (MAC) of the network
    # %s is replaced by the SSID (name) of the network
    # %c is replaced by the CHANNEL of the network
    # %r is replaced by the MAX RATE of the network
    speech_encrypted=New network detected, s.s.i.d. %s, channel %c, network encrypted.
    speech_unencrypted=New network detected, s.s.i.d. %s, channel %c, network open.
    # Where do we get our manufacturer fingerprints from? Assumed to be in the
    # default config directory if an absolute path is not given.
    ap_manuf=ap_manuf
    client_manuf=client_manuf
    # Use metric measurements in the output?
    metric=false
    # Do we write waypoints for gpsdrive to load? Note: This is NOT related to
    # recent versions of GPSDrive's native support of Kismet.
    waypoints=false
    # GPSDrive waypoint file. This WILL be truncated.
    waypointdata=%h/.gpsdrive/way_kismet.txt
    # Do we want ESSID or BSSID as the waypoint name ?
    waypoint_essid=false
    # How many alerts do we backlog for new clients? Only change this if you have
    # a -very- low memory system and need those extra bytes, or if you have a high
    # memory system and a huge number of alert conditions.
    alertbacklog=50
    # File types to log, comma seperated
    # dump - raw packet dump
    # network - plaintext detected networks
    # csv - plaintext detected networks in CSV format
    # xml - XML formatted network and cisco log
    # weak - weak packets (in airsnort format)
    # cisco - cisco equipment CDP broadcasts
    # gps - gps coordinates
    logtypes=dump,network,csv,xml,weak,cisco,gps
    # Do we track probe responses and merge probe networks into their owners?
    # This isn't always desireable, depending on the type of monitoring you're
    # trying to do.
    trackprobenets=true
    # Do we log "noise" packets that we can't decipher? I tend to not, since
    # they don't have anything interesting at all in them.
    noiselog=false
    # Do we log corrupt packets? Corrupt packets have enough header information
    # to see what they are, but someting is wrong with them that prevents us from
    # completely dissecting them. Logging these is usually not a bad idea.
    corruptlog=true
    # Do we log beacon packets or do we filter them out of the dumpfile
    beaconlog=true
    # Do we log PHY layer packets or do we filter them out of the dumpfile
    phylog=true
    # Do we mangle packets if we can decrypt them or if they're fuzzy-detected
    mangledatalog=true
    # Do we do "fuzzy" crypt detection? (byte-based detection instead of 802.11
    # frame headers)
    # valid option: Comma seperated list of card types to perform fuzzy detection
    # on, or 'all'
    fuzzycrypt=wtapfile,wlanng,wlanng_legacy,wlanng_avs,hostap,wlanng_wext,ipw2200,ipw2915
    # Do we do forgiving fuzzy packet decoding? This lets us handle borked drivers
    # which don't indicate they're including FCS, and then do.
    fuzzydecode=wtapfile,radiotap_bsd_a,radiotap_bsd_g,radiotap_bsd_bg,radiotap_bsd_b,pcapfile
    # Do we use network-classifier fuzzy-crypt detection? This means we expect
    # packets that are associated with an encrypted network to be encrypted too,
    # and we process them by the same fuzzy compare.
    # This essentially replaces the fuzzycrypt per-source option.
    netfuzzycrypt=true
    # What type of dump do we generate?
    # valid option: "wiretap"
    dumptype=wiretap
    # Do we limit the size of dump logs? Sometimes ethereal can't handle big ones.
    # 0 = No limit
    # Anything else = Max number of packets to log to a single file before closing
    # and opening a new one.
    dumplimit=0
    # Do we write data packets to a FIFO for an external data-IDS (such as Snort)?
    # See the docs before enabling this.
    #fifo=/tmp/kismet_dump
    # Default log title
    logdefault=Kismet
    # logtemplate - Filename logging template.
    # This is, at first glance, really nasty and ugly, but you'll hardly ever
    # have to touch it so don't complain too much.
    # %n is replaced by the logging instance name
    # %d is replaced by the current date as Mon-DD-YYYY
    # %D is replaced by the current date as YYYYMMDD
    # %t is replaced by the starting log time
    # %i is replaced by the increment log in the case of multiple logs
    # %l is replaced by the log type (dump, status, crypt, etc)
    # %h is replaced by the home directory
    # ie, "netlogs/%n-%d-%i.dump" called with a logging name of "Pok" could expand
    # to something like "netlogs/Pok-Dec-20-01-1.dump" for the first instance and
    # "netlogs/Pok-Dec-20-01-2.%l" for the second logfile generated.
    # %h/netlots/%n-%d-%i.dump could expand to
    # /home/foo/netlogs/Pok-Dec-20-01-2.dump
    # Other possibilities: Sorting by directory
    # logtemplate=%l/%n-%d-%i
    # Would expand to, for example,
    # dump/Pok-Dec-20-01-1
    # crypt/Pok-Dec-20-01-1
    # and so on. The "dump", "crypt", etc, dirs must exist before kismet is run
    # in this case.
    logtemplate=%n-%d-%i.%l
    # Where do we store the pid file of the server?
    piddir=/var/run/
    # Where state info, etc, is stored. You shouldnt ever need to change this.
    # This is a directory.
    configdir=%h/.kismet/
    # cloaked SSID file. You shouldn't ever need to change this.
    ssidmap=ssid_map
    # Group map file. You shouldn't ever need to change this.
    groupmap=group_map
    # IP range map file. You shouldn't ever need to change this.
    ipmap=ip_map
    I'm just not sure if the "source=" line is correct... :)
    Here is my lspci
    00:00.0 Host bridge: Intel Corporation Mobile 945GME Express Memory Controller Hub (rev 03)
    00:02.0 VGA compatible controller: Intel Corporation Mobile 945GME Express Integrated Graphics Controller (rev 03)
    00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
    00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 02)
    00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 02)
    00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 02)
    00:1c.2 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 3 (rev 02)
    00:1c.3 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 4 (rev 02)
    00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 02)
    00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 02)
    00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 02)
    00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 02)
    00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
    00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
    00:1f.2 IDE interface: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA IDE Controller (rev 02)
    00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 02)
    01:00.0 System peripheral: JMicron Technologies, Inc. SD/MMC Host Controller
    01:00.2 SD Host controller: JMicron Technologies, Inc. Standard SD Host Controller
    01:00.3 System peripheral: JMicron Technologies, Inc. MS Host Controller
    01:00.4 System peripheral: JMicron Technologies, Inc. xD Host Controller
    02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)
    03:00.0 Ethernet controller: Atheros Communications Inc. AR242x 802.11abg Wireless PCI Express Adapter (rev 01)
    04:00.0 System peripheral: JMicron Technologies, Inc. SD/MMC Host Controller
    04:00.2 SD Host controller: JMicron Technologies, Inc. Standard SD Host Controller
    04:00.3 System peripheral: JMicron Technologies, Inc. MS Host Controller
    04:00.4 System peripheral: JMicron Technologies, Inc. xD Host Controller
    Thanks in advance, guys. :)

    [koala@myhost ~]$ lsmod
    Module Size Used by
    uhci_hcd 18764 0
    video 16208 0
    backlight 3652 1 video
    ath_pci 207800 0
    wlan 186612 1 ath_pci
    ath_hal 298208 1 ath_pci
    ath5k 88896 0
    Here is my lsmod...:)

  • Update Rule Error ( Invalid Data) while load 0VENDOR

    Hello All,
    I have attributes 0POSTAL_CD (Postal Code, 10 char) and 0SORTL (Sort Field, 10 char) on Vendor. From yesterday onwards, the production master data loads for 0VENDOR started failing with:
    Record 211 :0POSTALCODE : Data record 211 ('03 2VEN4000040 '): Version 'HK HONG KONG ' is not valid
    Record 136 :0SORTL : Data record 136 ('0002000175 '): Version 'SATTLER, I ' is not valid
    Solutions tried:
    Interestingly, what we found is: when you edit the PSA data for the above records and re-enter the same values, the load runs fine.
    I found a thread somewhat related to my problem, but I couldn't get the solution from it.
    Please help me solve this issue. Thanks for your support.

    Hi Ram,
    Adding more valid characters is a possible option, but in your case I don't believe that is a good idea.
    I believe this message comes from an automatic master data validation.
    Ex:
    0SORTL: it is not valid to use a comma (,) or a space in this field.
    0POSTALCODE: the value 'HK HONG KONG' does not look like a correct value for a zip code.
    I believe this validation is not about allowed extra characters.
    Check both InfoObjects and see which data types they support.
    Hope this helps!

  • How to check a XMLTYPE table for corrupted (not invalid!) XML records ??

    Hello,
    I'd like to get help on the following issue, please. I need to check the XML data in a number of tables with the XMLTYPE data type. Some of the data is corrupted, not in the sense that the XML format is invalid, but in that there is no XML at all in the fields: only, for example, control characters (no tags, nothing, just corrupted data).
    So, I have made a PL/SQL procedure with a cursor to get all the tables from a list, and another cursor inside to browse each table for corrupted records. But can you help me with how to check this? Any of the XML functions, for example XMLIsValid, isFragment(), getrootelement(), etc., return the Oracle error "ORA-31011 XML parsing failed", and the procedure gets stuck on the first bad record found. But I need to continue and check all of them. Is it possible to catch the ORA-31011 error with an EXCEPTION handler, write the corrupted record ID to a logging table, and then continue?
    Here is my simple procedure:
    CREATE OR REPLACE PROCEDURE CHECKXML (v_schema in VARCHAR2)
    IS
         v_Message     VARCHAR2(254);
         sql_stmt1     VARCHAR2(1000);
         sql_stmt2     VARCHAR2(1000);
         c1           SYS_REFCURSOR;
         c2           SYS_REFCURSOR;
         cur_tab          varchar2(100);
         cur_appl     varchar2(100);
         cur_rec     varchar2(200);
         valid_flag     number;
         criteria     VARCHAR2(20);
         tab1          VARCHAR2(20);
         tab2          VARCHAR2(20);
    BEGIN
         criteria := 'XMLTYPE';
         sql_stmt1 := 'select id, path from ' || v_schema || '.stubfiles where type=:bcriteria';
         open c1 for sql_stmt1 using criteria;
         loop
         begin
              fetch c1 into cur_tab, cur_appl;
              exit when c1%notfound;
                   insert into system.log_table values (sysdate, v_schema, 'next table', 'id ' || cur_tab, 'appl ' || cur_appl, '');
              sql_stmt2 := 'select t.recid, t.xmlrecord.isFragment() from ' || v_schema || '.' || cur_tab || ' t';
              open c2 for sql_stmt2;
              loop
              begin
              fetch c2 into cur_rec, valid_flag;
                   exit when c2%notfound;
                   insert into system.log_table values (sysdate, v_Schema, 'next record', 'id ' || cur_tab, 'recid ' || cur_rec, 'valid ' || valid_flag);
                   commit;
              EXCEPTION
                   WHEN OTHERS THEN v_Message := sqlerrm;
                   dbms_output.put_line('Error for ' || cur_tab);
                   dbms_output.put_line('-' || v_Message);
                    insert into system.log_table values (sysdate, cur_tab, 'id err ' || cur_rec, 'appl err ' || cur_appl, v_Message, '');
              end;
              end loop;
              close c2;
              commit;
         end;
         end loop;
         close c1;
         commit;
    END CHECKXML;
    Thanks in advance
    Evgeni

    Hi
    Why do you use a GTT? Just keep your xml in memory...
    HTH
    Chris
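    For the original question of scanning every record without aborting on the first ORA-31011, one option is to trap that specific error per record with PRAGMA EXCEPTION_INIT. A minimal sketch (the table demo_tab, its columns recid/xmlrecord, and the bad_xml_log table are assumptions for illustration; substitute your own names):

    ```sql
    DECLARE
      xml_parse_failed EXCEPTION;
      PRAGMA EXCEPTION_INIT(xml_parse_failed, -31011);
      v_root VARCHAR2(4000);
    BEGIN
      -- The driving cursor fetches only the record ID, so the fetch itself
      -- never has to parse the XML and never raises ORA-31011.
      FOR r IN (SELECT recid FROM demo_tab) LOOP
        BEGIN
          -- Force a parse of this single record; corrupted data raises ORA-31011
          SELECT t.xmlrecord.getrootelement()
            INTO v_root
            FROM demo_tab t
           WHERE t.recid = r.recid;
        EXCEPTION
          WHEN xml_parse_failed THEN
            -- Log the bad row and carry on with the next one
            INSERT INTO bad_xml_log (logged_at, recid)
            VALUES (SYSDATE, r.recid);
        END;
      END LOOP;
      COMMIT;
    END;
    /
    ```

    Because the per-record check sits in its own inner block, the handler resumes the loop instead of propagating the error out of the procedure.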

  • Performance issue (Oracle 10.2.0.3.0)

    Hi All,
    I have written the following procedure, but it is taking more than 5 hours to execute.
    I have created the index as follows.
    I created an index to speed up the search for the first code with a null sdn:
    CREATE INDEX TMP.IDX_RED_COAT_2 ON TMP.RED_COAT
    (CODE_YEAR, SUBSTR("CODE",1,2), NVL("SDN",'0'))
    NOLOGGING
    TABLESPACE TPOCLIENT_INDEX_5M_01
    PCTFREE    0
    INITRANS   2
    MAXTRANS   255
    STORAGE    (
                INITIAL          237M
                NEXT             1M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
                BUFFER_POOL      DEFAULT
               )
    NOPARALLEL;
    and the procedure is:
    CREATE OR REPLACE PROCEDURE TMP.pr_delight
    AS
       v_promo       red_coat.code%TYPE;
       v_date         DATE;
       CURSOR c1 (v_date DATE)
       IS
          SELECT DISTINCT ref_v.sdn sdn,
                          ref_v.recharge_type recharge_type,
                          ref_v.REFERENCE REFERENCE,
                          icat.langcode lang_code
                     FROM tmp.DEF_view ref_v,
                          inf_tmp icat
                    WHERE ref_v.sdn = icat.cardnum
                      AND ref_v.currency IN (
                                   SELECT emp_id_d
                                     FROM tmp.emp
                                    WHERE emp_id_h = 'CURRENCY'
                                          AND emp_txt = 'EURO')
                      AND ref_v.recharge_amount >= 100
                      AND ref_v.date_exec > v_date
                      AND ref_v.sdn NOT IN (
                             SELECT gprs_sdn
                               FROM stage.ppa_gprs
                              WHERE ppa_gprs.prom_idct =
                                       (SELECT emp_txt
                                          FROM tmp.emp
                                         WHERE emp_id_h = 'LOT'
                                           AND emp_id= 'LOT_ID'))
                      AND icat.profiled NOT IN ('KADOR', 'DINHG');
              rec   c1%ROWTYPE;
    BEGIN
       v_promo := NULL;
       v_date := NULL;
       SELECT TO_DATE (emp_txt, 'DD-MON-YYYY')
         INTO v_date
         FROM tmp.emp
        WHERE emp_id_h = 'LOT'
         AND emp_id = 'LAST_DATE'
         AND emp_id_t = 'D';
        OPEN c1 (v_date);
       LOOP
          FETCH c1 INTO rec;
          EXIT WHEN c1%NOTFOUND;
          SELECT code
            INTO v_promo
            FROM tmp.red_coat
           WHERE SUBSTR (code, 1, 2) = TO_CHAR (SYSDATE, 'MM')
             AND code_year = TO_CHAR (SYSDATE, 'YYYY')
             AND nvl(sdn,0) =0
             AND ROWNUM = 1;
          UPDATE red_coat
             SET sdn = SUBSTR (rec.sdn, 3),
                 REFERENCE = rec.REFERENCE,
                 recharge_type = rec.recharge_type,
                 assign_date = TRUNC (SYSDATE),
                 lang_code = rec.lang_code
           WHERE code = v_promo;
          COMMIT;
       END LOOP;
       UPDATE tmp.emp
          SET emp_txt = TO_CHAR (SYSDATE, 'DD-MON-YYYY')
        WHERE emp_id_h = 'LOT'
         AND emp_id = 'LAST_DATE'
         AND emp_id_t = 'D';
       COMMIT;
       CLOSE c1;
    EXCEPTION
       WHEN OTHERS
       THEN
          UPDATE tmp.emp
             SET emp_txt = TO_CHAR (SYSDATE, 'DD-MON-YYYY')
           WHERE emp_id_h = 'LOT'
            AND emp_id = 'LAST_DATE'
            AND emp_id_t = 'D';
          COMMIT;
          CLOSE c1;
    END pr_delight;
    Can anyone please look into this, correct the code, and suggest ways to improve the performance of this procedure?
    Thank you,

    I remember attending to this procedure performance problem earlier, a couple of weeks ago.
    I also remember suggesting that you do away with the cursor and use one UPDATE statement with joins.
    There are many places where you can modify the code. Here are a few, apart from the single UPDATE statement suggestion.
    SELECT DISTINCT ref_v.sdn sdn,
                          ref_v.recharge_type recharge_type,
                          ref_v.REFERENCE REFERENCE,
                          icat.langcode lang_code
                     FROM tmp.DEF_view ref_v,
                          inf_tmp icat
                    WHERE ref_v.sdn = icat.cardnum
                    /* Use EXISTS in place of IN and with a correlated sub-query */
                      AND ref_v.currency IN (
                                   SELECT emp_id_d
                                     FROM tmp.emp
                                    WHERE emp_id_h = 'CURRENCY'
                                          AND emp_txt = 'EURO')
                      AND ref_v.recharge_amount >= 100
                      AND ref_v.date_exec > v_date
                      /* Use NOT EXISTS in place of NOT IN and with a correlated sub-query */
                      AND ref_v.sdn NOT IN (
                             SELECT gprs_sdn
                               FROM stage.ppa_gprs
                              WHERE ppa_gprs.prom_idct =
                      /* Do a JOIN with stage.ppa_gprs and tmp.emp instead of sub-query */               
                                       (SELECT emp_txt
                                          FROM tmp.emp
                                         WHERE emp_id_h = 'LOT'
                                           AND emp_id= 'LOT_ID'))
                       AND icat.profiled NOT IN ('KADOR', 'DINHG');
    You can make this SELECT part of the cursor SELECT, or even get the v_date from the first sub-query in the cursor SELECT:
    SELECT TO_DATE (emp_txt, 'DD-MON-YYYY')
         INTO v_date
         FROM tmp.emp
        WHERE emp_id_h = 'LOT'
         AND emp_id = 'LAST_DATE'
          AND emp_id_t = 'D';
    Why is this SELECT inside the LOOP? Bring it out of the loop or make it part of the UPDATE statement:
          SELECT code
            INTO v_promo
            FROM tmp.red_coat
           WHERE SUBSTR (code, 1, 2) = TO_CHAR (SYSDATE, 'MM')
             AND code_year = TO_CHAR (SYSDATE, 'YYYY')
             AND nvl(sdn,0) =0
              AND ROWNUM = 1;
    Why are you committing inside the loop? How is your cursor valid after the commit?
    Cheers
    Sarma.
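    Sarma's EXISTS / NOT EXISTS suggestions, applied to the cursor query, would look roughly like this (table and column names are taken from the original post; treat it as a sketch under those assumptions, not a tested rewrite):

    ```sql
    SELECT DISTINCT ref_v.sdn, ref_v.recharge_type,
                    ref_v.reference, icat.langcode
      FROM tmp.def_view ref_v,
           inf_tmp icat
     WHERE ref_v.sdn = icat.cardnum
       -- correlated EXISTS replaces the IN sub-query
       AND EXISTS (SELECT 1
                     FROM tmp.emp e
                    WHERE e.emp_id_h = 'CURRENCY'
                      AND e.emp_txt  = 'EURO'
                      AND e.emp_id_d = ref_v.currency)
       AND ref_v.recharge_amount >= 100
       AND ref_v.date_exec > :v_date
       -- correlated NOT EXISTS replaces NOT IN, with the nested
       -- tmp.emp sub-query folded into a join
       AND NOT EXISTS (SELECT 1
                         FROM stage.ppa_gprs g
                         JOIN tmp.emp e
                           ON e.emp_id_h  = 'LOT'
                          AND e.emp_id    = 'LOT_ID'
                          AND g.prom_idct = e.emp_txt
                        WHERE g.gprs_sdn = ref_v.sdn)
       AND icat.profiled NOT IN ('KADOR', 'DINHG');
    ```

    NOT EXISTS also sidesteps the classic NOT IN pitfall: if gprs_sdn can be NULL, the original NOT IN sub-query returns no rows at all.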

  • ERPI Metadata load fails

    Hi,
    1. I am trying to load metadata to classic essbase application from peoplesoft using ERPI. The process fails with the below error.
    ERPI Process Start, Process ID: 19
    ERPI Logging Level: DEBUG (5)
    ERPI Log File: C:\Windows\TEMP\/aif_19.log
    Jython Version: 2.1
    Java Platform: java1.4.2_08
    COMM Dimension Pre-Processing - Multi Process Validation - START
    COMM Dimension Pre-Processing - Multi Process Validation - END
    Error in Start PS Load Dimension Members
    java.lang.Exception: The scenario did not end properly.
    When looked at ODI, the error is as below.
    java.sql.SQLException: java.sql.SQLException: ORA-00001: unique constraint (TST_ERP.AIF_DIM_MEMBERS_STG_U1) violated
    2. Chartfields in Peoplesoft does not have the hierarchy structure of parent child relationship. How will ERPi build the hierarchy in Hyperion if the chartfield do not contain the hierarchy structure.
    Regards,
    Ragav.

    Jeff,
    Thanks for the response. I had opened an SR with Oracle for this. It was a bug, and the resolution is below. After this change, the process succeeds; however, the metadata does not flow into the Hyperion Essbase application. The load method is Classic. Do I need to configure any adapters for ERPI to work with Essbase?
    SYMPTOMS
    Issue Clarification
    When running a metadata rule within ERPi for a PeopleSoft Source System the metadata rule fails. The ODI Operator shows that the "PS_GL_LOAD_DIMENSION_MEMBERS" process is failing at step 20 PS Extract Dimension Members.
    java.sql.SQLException: ORA-00001: unique constraint (ERPINTEGRATOR.AIF_DIM_MEMBERS_STG_U1) violated
    CAUSE
    This caused a unique constraint exception as there were multiple records with the same "NAME" but with different EFFDT values in the AIF_PS_CF_METADATA table.
    SOLUTION
    Until the fix is included in the next service pack, the following update
    statement can be run *after* every execution of the Initialize Source System
    process on a PSFT source system:
    UPDATE AIF_PS_CF_METADATA
    SET EFFDT_FLAG = 'Y'
    WHERE SOURCE_SYSTEM_ID = ?
    AND EFFDT_FLAG = 'N'
    AND (
    (FIELDNAME = 'AFFILIATE_INTRA1' AND RECNAME = 'AFFINTRA1_VW')
    OR (FIELDNAME = 'AFFILIATE_INTRA2' AND RECNAME = 'AFFINTRA2_VW')
    Regards,
    Ragav.

  • How to cascade update

    how to cascade update?

    Sorry everyone for going regional with my Mexican friend!
    In MEXMAN's first case, integrity on related data was already being enforced through a valid foreign key (TIZAYUCA.REGLAFAB_PRODUCTO_FK). However, the user wanted to update data included in the foreign key relationship. To accomplish this, I recommended changing the constraint state to defer validation until commit. Using the now familiar column names, the statements are:
    ALTER TABLE TIZAYUCA.REGLA_FABRICACION
    DROP CONSTRAINT REGLAFAB_PRODUCTO_FK;
    ALTER TABLE TIZAYUCA.REGLA_FABRICACION
    ADD CONSTRAINT REGLAFAB_PRODUCTO_FK FOREIGN KEY (GRUPO, PRODUCTO)
    REFERENCES TIZAYUCA.PRODUCTO (GRUPO, CLAVE)
    INITIALLY DEFERRED;
    In MEXMAN's second case, a wholly new integrity constraint needs to be enforced between two tables, but some existing data in the tables does not satisfy the desired constraint. The offending rows from the child table (TIZAYUCA.PRESENTACION) may be obtained by executing:
    SELECT *
    FROM tizayuca.presentacion t
    WHERE (t.grupo, t.producto) NOT IN
    (SELECT p.grupo, p.clave
    FROM tizayuca.producto p);
    MEXMAN has 3 choices:
    1. deleting the offending rows in the child table (TIZAYUCA.PRESENTACION) before executing
    alter table TIZAYUCA.PRESENTACION enable constraint PRES_PROD_FK;
    2. inserting the missing rows in the parent table (TIZAYUCA.PRODUCTO) before executing
    alter table TIZAYUCA.PRESENTACION enable constraint PRES_PROD_FK;
    3. making the new constraint ignore the present data and enforcing the relationship starting with new data:
    alter table TIZAYUCA.PRESENTACION
    add constraint PRES_PROD_FK foreign key (GRUPO, PRODUCTO)
    references TIZAYUCA.PRODUCTO (GRUPO, CLAVE)
    deferrable initially deferred
    enable novalidate;
    (drop the present PRES_PROD_FK constraint first)
    Please find more on the constraint topic here:
    http://download-east.oracle.com/docs/cd/B14117_01/server.101/b10759/clauses002.htm#CJAFFBAA
    and here:
    http://download-east.oracle.com/docs/cd/B14117_01/server.101/b10759/clauses002.htm#i1002273
    Best regards,
    Luis Morales,
    ConsiteNicaragua.com
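    To see the deferred constraint in action: once REGLAFAB_PRODUCTO_FK is recreated as INITIALLY DEFERRED, both parent and child keys can be updated inside one transaction, and the foreign key is validated only at COMMIT. A sketch using the thread's table names (the key values 'G1', 'OLD_KEY', 'NEW_KEY' are made up for illustration):

    ```sql
    -- Change the parent key first; the FK is not checked yet
    UPDATE tizayuca.producto
       SET clave = 'NEW_KEY'
     WHERE grupo = 'G1' AND clave = 'OLD_KEY';

    -- Re-point the child rows at the new key
    UPDATE tizayuca.regla_fabricacion
       SET producto = 'NEW_KEY'
     WHERE grupo = 'G1' AND producto = 'OLD_KEY';

    COMMIT;  -- REGLAFAB_PRODUCTO_FK is validated here
    ```

    If the data does not match up at commit time, the COMMIT fails and the transaction is rolled back, so integrity is still guaranteed.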

  • How to minimize Client-Server Round-trip in ADF Faces application ?

    Hi All,
    We have just finished a POC on our prototype ADF Faces + ADF BC application. The POC emphasizes bandwidth requirements.
    We received the results from the communication provider, including: TCP packets sent, bytes sent from/to the server, and the number of client-server round-trips.
    Several parts of the application must be tuned for it to run with acceptable performance.
    Here are some pages/functions that should be tuned:
    - First page, ADF Read Only Table with two images and some buttons, cause 5 round-trip
    - ADF Dialog Returning Value (as LOV), cause 4 Round-trips
    - On ADF Form, press Commit button, cause 3 Round-trips.
    So the question is :
    1) How to reduce round-trips on ADF Faces application ?
    2) How to minimize the bytes send from / To server on a specific user action ?
    Please give me some recommendation..
    Thank you very much,
    xtanto

    Hi Frank and Steve,
    Thank you for your reply.
    Yes, Frank, by round-trip I mean the traffic between the client and the server. And yes, we will use VSAT, where the latency is 1 to 1.5 seconds, so round-trips matter significantly.
    What I will do is :
    - use minimal skin and No image at all
    - don't use Dialog for LOV because it requires AutoSubmit there.
    - Use 'Apply-Changes' button to do server-side validation before Commit.
    Then do the POC / testing again.
    Thank you,
    xtanto

  • Open Form Restriction

    Dear All Assalam-o-Alikum,
    I am using the Open Form procedure in an MDI window, and the default Oracle toolbar is also attached to every form. I am facing a problem: when I open two forms and enter data on both forms one by one, then press the Save button, Forms saves the data on both forms, while it should save the data only on the current form. What should I do to achieve this?
    Thanks
    Best Regards
    Farrukh Shaikh

    Change the session parameter in OPEN_FORM to SESSION. By default it is NO_SESSION, which means all your forms will open in the same session.
    PROCEDURE OPEN_FORM
      (form_name      VARCHAR2,
       activate_mode  NUMBER,
       session_mode   NUMBER,
       data_mode      NUMBER,
       paramlist_id   PARAMLIST);
    session_mode     NO_SESSION  (The default.)  Specifies that the opened form should  share the same database session as the current form.  POST and COMMIT operations in any form will cause posting, validation, and commit processing to occur for all forms running in the same session.
    SESSION  Specifies that a new, separate database session should be created for the opened form.
    Please mark correct/helpful if it is.
    Baig,
    [My Oracle Blog|http://baigsorcl.blogspot.com/]
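    For example, a trigger (say, WHEN-BUTTON-PRESSED) that opens a form in its own database session could look like this; the form name ORDERS_FORM is an assumption for illustration. With SESSION mode, a COMMIT_FORM in one form no longer posts and commits the other form's changes:

    ```sql
    -- Open the form in a new, separate database session
    OPEN_FORM('ORDERS_FORM', ACTIVATE, SESSION);
    ```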
