Error in subset

Hello Experts,
I am getting an "Error in substep" message (message RSBK231) during the Data Transfer Process
0COSTCENTER -> ZCOSTTMP.
Path: system DBI200 > RSA1 > Open Hub > Execute Data Transfer Process
0COSTCENTER -> ZCOSTTMP.
Thanks in advance,
Arjun


Similar Messages

  • Use of Semantic Groups in DTP

    Hi,
    Can you please explain the use of semantic groups in a DTP?
    Thanks
    Jose

    Hi Jose,
    BI 7 DTP semantic groups:
    - define the key of the error stack;
    - are a subset of the key of the target object;
    - determine which data is withheld from the update after an erroneous data record (for DSO objects);
    - bundle records with the same semantic group key into the same request.
    See also these links:
    /thread/238438 [original link is broken]
    Regarding semantic groups in dtp
    Rgds,
    Colum
    Edited by: Colum Cronin on Apr 8, 2010 8:51 AM
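    To illustrate the last point only (this is not SAP code; the record type and field names are made up), bundling records that share a semantic group key into the same request can be sketched in Java like this:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SemanticGrouping {
    // Hypothetical record type for illustration only.
    record CostRecord(String costCenter, int amount) {}

    // Bundle records sharing the same semantic group key (here: costCenter)
    // into the same "request", mirroring the DTP behaviour described above.
    static Map<String, List<CostRecord>> bundleBySemanticKey(List<CostRecord> records) {
        return records.stream().collect(Collectors.groupingBy(CostRecord::costCenter));
    }

    public static void main(String[] args) {
        List<CostRecord> input = List.of(
                new CostRecord("CC100", 10),
                new CostRecord("CC200", 20),
                new CostRecord("CC100", 30));
        // Both CC100 records land in the same bundle.
        System.out.println(bundleBySemanticKey(input).get("CC100").size()); // prints 2
    }
}
```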

  • "no data to export" error when trying to export a subset

    I have a large data set (>1 million points). I want to give my customer the raw data for part of this log.
    I've created a subset in SignalExpress. I can drag the resulting subset onto a chart and view it OK, so I don't think there's a problem with the subset itself.
    However, I can't export the data from the subset using the export options available by right-clicking in the logs pane. Using the XLS export I get the error message "no data to export" and nothing happens. If I try the ASCII export, the action reports that it completed OK, but the .txt file it creates contains the header and no data points.
    I've created a smaller sample of data and performed the same steps and I get the same results, i.e. I don't seem to be able to export data from a log subset.
    Can anyone help?
    Mike

    Hi Mike,
    I have been able to capture a sample from SignalExpress 3.0, log it, right-click the log in the pane (as shown in the attached image), and successfully export to the attached ASCII file.
    Is this the method you are using? Have you tried plotting the data on the graph, then right-clicking the graph and selecting Export To >> Microsoft Excel/Clipboard (text)?
    Regards,
    Steve
    Attachments:
    SigExpASCII.JPG ‏27 KB
    Voltage.txt ‏1014 KB

  • "generate subset" in script gives error if even number

    I am trying to generate with a script using a PXI-6551. In the Generate subset command, if it is an EVEN number, such as subset(0,17) below (green), it gives the error below (red). If it is an ODD number it works. Is this a bug? Normally you would need an even number of samples to be able to generate, but with the script it seems to give the error when the number is even.
    Thanks,
    Brian
    script myScript
       repeat 1
          Generate PRBS
       end repeat
       Generate PRBS subset(0,17) <-gives error   (0,18) does not give error
    end script
    Error -1074115617 occurred at niHSDIO Write Script.vi
    Possible reason(s):
    Driver Status:  (Hex 0xBFFA4BDF) DAQmx Error -200032 occurred:
    Subset length specified is not valid.
    Change the subset length to be longer than zero samples and a multiple of the alignment quantum.
    Line Number: 5
    Position in Line: 18
    Subset Length: 17
    Alignment Quantum: 2
    Status Code: -200032
    Message Edited by BrianPack on 09-29-2005 04:29 PM

    Brian,
    The format (x, y) means y samples starting at position x. So when you specify (0,17) you generate 17 samples starting at position 0, and with (1,17) you would still generate 17 samples, just starting at position 1. In either case you are asking to generate an odd number of samples, which the driver rejects because the subset length must be a multiple of the alignment quantum (2). Hope this helps.
    Ayman K
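    As a sketch of the rule quoted in the error message (this is just an illustration of the driver's stated constraint, not driver code), the validity of a subset length could be checked like this:

```java
public class SubsetCheck {
    // The error message states the rule: the subset length must be longer
    // than zero samples and a multiple of the alignment quantum (2 here).
    static boolean isValidSubsetLength(int length, int alignmentQuantum) {
        return length > 0 && length % alignmentQuantum == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValidSubsetLength(17, 2)); // prints false: subset(0,17) is rejected
        System.out.println(isValidSubsetLength(18, 2)); // prints true: subset(0,18) is accepted
    }
}
```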

  • Error: PDF Embed and PDF Subset

    Hi,
    I'm trying to print a PDF report in my web browser using the new PDF subset feature of Reports Server 9i to subset a barcode font, but I'm having the following problem:
    When the PDF is being generated, Adobe Acrobat 5 is showing the error (110) - Problem reading the document.
    My O.S is Red Hat Linux 7.3 and the Oracle 9ias R2 is installed with the patches p2703110 (9.0.2.2 Core Patchset) and p2842923.
    I'm trying to view the report on a Windows 2000 machine and on the same Linux machine.
    Note that PDF Embed works just fine, but ONLY on the first try. When I try to refresh the report in the browser, Reports Server shows the PDF font embed error.
    Does anybody know what is happening?
    Is this some kind of bug?
    Marcus Santos.
    ([email protected])

    Here is an earlier thread discussing this problem:
    http://www.adobeforums.com/webx?128@@.59b7927d
    There's some unrelated stuff in the middle, but relevant information at both ends.
    Peter

  • Error While Deploying A CMP Entity Bean With A Composite Primary Key

    Hello all,
    I have a problem deploying CMP Entity beans with composite primary keys. I have a CMP Entity Bean, which contains a composite primary key composed of two local stubs. If you know more about this please respond to my post on the EJB forum (subject: CMP Bean Local Stub as a Field of a Primary Key Class).
    In the meantime, can you please tell me what the following error message means and how to resolve it? From what I understand it might be a problem with Sun ONE AS 7, but I would like to make sure it's not me doing something wrong.
    [05/Jan/2005:12:49:03] WARNING ( 1896):      Validation error in bean CustomerSubscription: The type of non-static field customer of the key class
    test.subscription.CustomerSubscriptionCMP_1530383317_JDOState$Oid must be primitive or must implement java.io.Serializable.
         Update the type of the key class field.
         Warning: All primary key columns in primary table CustomerSubscription of the bean corresponding to the generated class test.subscription.CustomerSubscriptionCMP_1530383317_JDOState must be mapped to key fields.
         Map the following primary key columns to key fields: CustomerSubscription.CustomerEmail, CustomerSubscription.SubscriptionType. If you already have fields mapped to these columns, verify that they are key fields.
    Is it enough that the primary key class itself be Serializable, or do all of its fields have to implement Serializable or be primitives?
    Please let me know if you need more information to answer my question.
    Thanks.
    Nikola

    Hi Nikola,
    There are several problems with your CMP bean.
    1. Fields of a primary key class must be a subset of the CMP fields, so yes, they must be either a primitive or a Serializable type.
    2. Sun Application Server does not support primary key fields of an arbitrary Serializable type (i.e. those that would be stored as a BLOB in the database) - only primitives, Java wrappers, String, and Date/Time types.
    Are you trying to use stubs instead of relationships, or for some other reason?
    If it's the former, look at CMR fields.
    If it's the latter, I suggest storing these fields as regular CMP fields and using some other value as the PK. If you prefer that the CMP container generate the PK values, use the Unknown Primary Key feature.
    Regards,
    -marina
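    For illustration, a composite primary key class that satisfies these constraints might look like the sketch below. The class name and field types are assumptions based on the CustomerEmail and SubscriptionType columns mentioned in the error message; it uses plain String fields rather than local stubs:

```java
import java.io.Serializable;
import java.util.Objects;

// Hypothetical composite primary key class: every field is a String
// (Serializable, not a stub), matching the container's restrictions above.
public class CustomerSubscriptionPK implements Serializable {
    public String customerEmail;
    public String subscriptionType;

    public CustomerSubscriptionPK() {}

    public CustomerSubscriptionPK(String customerEmail, String subscriptionType) {
        this.customerEmail = customerEmail;
        this.subscriptionType = subscriptionType;
    }

    // The EJB spec requires primary key classes to implement equals and hashCode.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CustomerSubscriptionPK)) return false;
        CustomerSubscriptionPK other = (CustomerSubscriptionPK) o;
        return Objects.equals(customerEmail, other.customerEmail)
                && Objects.equals(subscriptionType, other.subscriptionType);
    }

    @Override
    public int hashCode() {
        return Objects.hash(customerEmail, subscriptionType);
    }
}
```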

  • HSDIO conditionally fetch hardware compare sample errors (script trigger to flag whether or not to wait for software trigger)

    I am moderately new to LabVIEW and definitely new to the HSDIO platform, so my apologies if this is either impossible or silly!
    I am working on a system that consists of multiple PXI-6548 modules that are synchronized using T-CLK and I am using hardware compare.  The issue I have is that I need to be able to capture ALL the failing sample error locations from the hardware compare fetch VI... By ALL I mean potentially many, many more fails than the 4094 sample error depth present on the modules.
    My strategy has been to break up a large waveform into several subsets that are no larger than 4094 samples (to guarantee that I can't overflow the error FIFO) and then fetch the errors for each block. After the fetch is complete I send a software reference trigger that is subsequently exported to a scriptTrigger that tells the hardware it is OK to proceed (I do this because my fetch routine is in a while loop, and LabVIEW says that the "repeated capability has not yet been defined" if I try to use a software script trigger in a loop).
    This works fine, but it is also conceivable that I could have 0 errors in 4094 samples.  In such a case what I would like to do is to skip the fetching of the hardware compare errors (since there aren't any) and immediately begin the generation of the next block of the waveform.  That is, skip the time where I have to wait for a software trigger.
    I tried to do this by exporting the sample error event to a PFI and looping that PFI back in to generate a script trigger.  What I thought would happen was that the script trigger would get asserted (and stay asserted) if there was ever a sample error in a block, then I could clear the script trigger in my script.  However, in debug I ended up exporting this script trigger back out again and saw that it was only lasting for a few hundred nanoseconds (in a case where there was only 1 injected sample error)... The sample error event shows up as a 1-sample wide pulse.
    So, my question is this:  is there a way to set a flag to indicate that at least one sample error occurred in a given block  that will persist until I clear it in my script?  What I want to do is below...
    generate wfmA subset (0, 4094)
    if scriptTrigger1
      clear scriptTrigger1
      wait until scriptTrigger0
    end 
    clear scriptTrigger0
    generate wfmA subset (4094, 4094)
    I want scriptTrigger1 to be asserted only if there was a sample error in any block of 4094 and it needs to stay asserted until it is cleared in the script.  scriptTrigger0 is the software trigger that will be sent only if a fetch is performed.  Again, the goal being that if there were no sample errors in a block, the waiting for scriptTrigger0 will not occur.
    I am probably going about it all wrong (obviously since it doesn't work), so any help would be much appreciated!
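    The blocking strategy described above (splitting a waveform into subsets no larger than the 4094-sample error depth) can be sketched like this; the class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class WaveformBlocks {
    // Split a waveform of totalSamples into (start, length) subsets no larger
    // than maxBlock, so the hardware-compare error FIFO (depth 4094) can never
    // overflow within a single block.
    static List<int[]> blockSubsets(int totalSamples, int maxBlock) {
        List<int[]> subsets = new ArrayList<>();
        for (int start = 0; start < totalSamples; start += maxBlock) {
            subsets.add(new int[] { start, Math.min(maxBlock, totalSamples - start) });
        }
        return subsets;
    }

    public static void main(String[] args) {
        // Emits one "generate ... subset" line per block, covering all samples.
        for (int[] s : blockSubsets(10000, 4094)) {
            System.out.println("generate wfmA subset (" + s[0] + ", " + s[1] + ")");
        }
    }
}
```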

    Please disregard most of my previous post... after some more debug work today I have been able to achieve the desired effect at slower frequencies.  I did straighten out my script too:
    generate wfmA
    if scriptTrigger1
      clear scriptTrigger0
      wait until scriptTrigger0
    end if
    generate wfmA
    scriptTrigger1 = sample error event flag
    scriptTrigger0 = software trigger (finished fetching error backlog in SW)
    However, I am still having a related issue.
    I am exporting the Sample Error Event to a PFI line, looping that back in on another PFI line, and having the incoming version of the Sample Error Event generate a script trigger.  My stimulus has a single injected sample error for debug. For additional debug I am exporting the script trigger to yet another PFI; I have the sample error event PFI and the script trigger PFI hooked up to a scope.
    If I run the sample clock rate less than ~133MHz everything works... I can see the sample error event pulse high for one clock period and the script trigger stays around until it is consumed by my script's if statement.
    Once I go faster than that I am seeing that the script trigger catches the sample error event occasionally.  The faster I go, the less often it is caught.  If I widen out the error to be 2 samples wide then it will work every time even at 200MHz.
    I have tried PFI0-3 and the PXI lines as the output terminal for the sample error event and they all have the same result (this implies the load from the scope isn't the cause).
    I don't know what else to try. I can't oversample my waveform because I need to run at a true 200MHz. I don't see anything that would give me any other control over the sample error event in terms of its pulse width, or any way to export it directly to a script trigger instead of how I'm doing it.
    Any other ideas?

  • "An error occurred while trying to save your photo library..."

    Installed iPhoto6.
    Started app. It said it would take a few seconds or minutes to upgrade.
    About 6 hours later after churning on thumbnails it is now claiming this error over and over with some time that goes by in between:
    "An error occurred while trying to save your photo library."
    "Some recent changes may be lost. Make sure your hard disk has enough space and that iPhoto is able to access the iPhoto Library folder."
    It says I have 22,759 photos in Library. The Library was a subset of my images, named "iPhoto Library 2005" and it was intentionally kept below the 25,000 limit of iPhoto5.
    Since Steve claims iPhoto6 can easily handle 250,000 photos, I clicked on the upgrade button with confidence. This now appears to be a mistake. Last night I was working on an iDVD for the local school and the majority of that work has simply vanished. There were 320 photos. I spent hours adding keywords, ranking, adjusting temperature, focus, exposure, etc.
    Of the 320, only 16 photos survived. They are 16 in a row from the middle of the roll near the end but not at the end. The roll name was lost. The name is now just "Roll 226" and none of the photos have keywords. I know at least one of the 16 should have the keyword "DUPE" and the ranking 2 STARS since it precedes a double that I ranked 2 during a slideshow, sorted by ranking, and dragged to the DUPE keyword. All 16 should have the ACROCK keyword which I initially assigned to all photos.
    Again, 304 of 320 have vanished and the 16 that remain have no keywords or rankings.
    I watched the slide show in iPhoto5 three times with different music candidates so I know the keywording and rankings evaporated in iPhoto6. I also clicked ACROCK and option clicked DUPE and POOR so that all of the music program photos except dupes and poor images would vanish. I only deleted one photo so there were not a lot of changes to the library in terms of photos in it. I cannot think of any reason why the 16 chosen survived. Only two were placed in a DVD dropzone and they are scattered amongst the 16 survivors.
    Also, the ACROCK and DUPE keywords are also gone while the POOR keyword added in August is in the keyword table just fine.
    I exported these 320 photos last night to iDVD and tried almost all of the different themes and I dragged about 10 different photos to the dropzones, hit play, hit motion, etc.
    It seems very strange that 16 photos from the middle of the pack would survive.
    It seems extra odd that iPhoto5 and iDVD treated these as "established" photos, not photos hanging in the wings somewhere ready to vanish in an upgrade.
    I gracefully saved and quit iDVD and iPhoto5 and there were no crashes and nothing strange happened while making the DVD. I was hoping today to try out the new themes and maybe find one that is just right for the kids. Instead, about 300 of 320 photos of the kids went bye-bye along with all the work on the photos!
    I'm nervous that many more photos are missing but I will need to do a ton of checking to see what else has been deleted by the upgrade.
    The user message that iPhoto6 presents is nearly useless and strange. It offers little if anything to go on in terms of a repair procedure. After saying an error has occurred, it says some recent changes could be lost but doesn't expand at all on what they may be or why.
    From my experience, recently added photos, keywords, keyword assignments, rankings and maybe more can all be lost. To compound matters, before the upgrade it says that you cannot go back to the prior (working!) library in iPhoto5. Beware!
    It finishes by suggesting two vague steps to take. The first is to make sure your disk has "enough" space. How much is "enough"? It doesn't say something like X GB available, Y GB needed. A user would have no clue about how much space to free up. I have over 17 GB and assumed that would be enough. I sure hope it doesn't require double the space of the existing library. I would hope it has a smarter algorithm than that since 23,000 photos at about 3 MB each maps to about 70 GB. That's a lot of "temporary" space to require to perform an upgrade. Since the upgrade/error message provide no details (unlike package installers which supposedly tell you how much space you need), users like me would have no clue if 70 GB is plenty or not enough.
    I found 24 GB secretly used by iPhoto 5 for an iPod Photo Cache and a tech bulletin saying this just grows and grows without control and it can be deleted SO I wouldn't put huge hidden disk usage past iPhoto6.
    The last none-too-helpful suggestion is that the user make sure that iPhoto is able to access the iPhoto Library folder. Huh?!?
    Wouldn't this be a great thing to confirm via software before setting about on 6+ hours of work, especially if there is an error message hard-coded in the software confirming that Apple knows the result may be losing valuable customer photos and hours of work on enhancing, keywording, creating DVDs, etc.?
    Of course. This appears negligent. To know that customer file loss will likely result but not checking these things out in advance before taking that risk seems like putting the cart before the horse at customers' expense & pain and certain loss of irreplaceable family photos (for the, say, 80%+ of iPhoto users that "plan" to get a backup routine established?).
    I got burned in the iPhoto5 upgrade as well, losing photos, so I was sure to have a backup so it wouldn't happen again. Most I fear won't be so fortunate so be advised of the risk and work with copies of your photos only.
    Also, why would iPhoto5 have all the permissions it needs to store, edit, tag and otherwise work with the 320 photos added last night, while iPhoto6 suddenly loses those permissions or doesn't have enough? This hardly makes any sense. The only way the permissions were set, the iPhoto libraries got created and the albums got created was by dragging a folder of images onto the albums pane in iPhoto5. So iPhoto5 set all the permissions, everything is under one user account, and there should be no surprise permissions for iPhoto6 to deal with.
    It seems unbelievable that iPhoto 6 couldn't do what iPhoto 5 was doing. I never changed permissions and never touched files in the iPhoto 5 structure.
    I can only guess that recently added keywords, recently added photos and recently added rankings and keyword taggings linger in some at-risk staging area that a version 6 upgrade doesn't know how to deal with. This seems like a lame way to code this app so I don't consider this a good guess but what else could it be?
    Anyone else have a better hunch at a root cause?
    Anyone else have an idea of a cure?
    Anyone else run into this?
    It sure doesn't look like there was a test case for having near 25,000 photos (an iPhoto 5 documented requirement), working with the latest roll and then doing a version 6 upgrade because it failed miserably--user data loss--without any special effort to find defects. In a former life I tested software and released software was much more difficult to break than this.
    I would appreciate any other user experiences/solutions related to this problem. I am reluctant to import another 22,000 photos from iPhoto Library 2004 only to find more photos go poof without any rhyme or reason.
    Thanks in advance for your help and I hope this alerts others to a potential for valuable photo loss (i.e., back up before you upgrade, verify iPhoto Library permissions--whatever that means--and have scads of disk space available assuming there's any truth to the error message!).
    --Sam
    G5 Dual 2.7 GHz 2 GB DDR SDRAM   Mac OS X (10.4.4)  

    I'll check if I have copies of copies of copies. This could take a while due to the number of files.
    I picked up a brand new 300 GB Western Digital hard drive thinking 20GB or so was not enough for temp files for upgrading and/or importing.
    I formatted it as a mac os extended, journaled drive, not case sensitive, one full-size partition. I started iPhoto6 with the option key. I had a CF in my other WD 300 GB (with media reader). iPhoto6 asked about the photos on the card. I told it not to download those (again). Because I was holding the option key down, iPhoto6 asked if I wanted to select a different library or create one. I created one on the brand new, 100% empty 300 GB drive. I'm not sure if/where iPhoto6 creates temporary/cache files (like iPhoto iPod Cache folder) as it sets up a new library. I was hoping the empty 300 GB drive would give it ample room to load an existing iPhoto5 library.
    When I dragged an old iPhoto5 library onto the new iPhoto6 library album pane, I saw the "Importing..." start, and it went on and on and on. I let it run for HOURS and HOURS. When I checked around midnight, it had crashed. When I started it again, it said it had run across some huge number of stray photos.
    I have no idea why iPhoto6 would have trouble importing an iPhoto5 library that was below the 25,000 photo mark for 5. This is especially bizarre since iPhoto6 claims it supports 250,000 photos!
    I'll be trying to figure out if these stray photos got imported as dupes, if they lost iPhoto5 keywords, etc. I fear something different but equally bad happened since the number of photos it came up with was like 79,000--again many stray photos that didn't make it into the first import for no known reason. Since libraries in iPhoto5 were around 25,000 or less, I'm not sure what's happening.
    Maybe there's a cache or temp file still created on Macintosh HD and not on the empty new 300GB drive and a bug exists for imports when space on Mac HD is "low" (i.e., under 3.5 GB). Maybe it's seeing thumbs as files it should import? Maybe it's seeing Orig and Modified images as worthy of reimporting?
    I had told it not to import dupes but who knows if it goes by file name, checksum, byte-for-byte comparison, etc.
    I'm not sure if I can even pull up iPhoto5 and 6 at the same time now to scroll through the libraries to see if dupes, thumbs, original/modifies/etc. are showing up to boost the count from under 25K to 79K!
    Ideas? At least I know I'm not the only G5 dual user with this problem!
    --Sam
    G5 Dual 2.7 GHz 2 GB DDR SDRAM Mac OS X (10.4.4)

  • Error in Patching!

    Hi,
    I installed Oracle Applications 11.5.10.2 and downloaded patch 5903765 for it.
    When I tried installing the patch, it gave me an error saying:
    aiosp2() Error: failure in usdspn()
    Contents of error buffer are:
    "usdsop cannot create a new process
    Cause: usdsop encountered an error creating a new process. [Reason].
    Action: Check that your system had enough resources to start a new process. Cont
    act your system administrator to obtain more resou (RE"
    An error occurred while extracting files from library.
    Contents of the Log File:
    ==================
    ************* Start of AutoPatch session *************
    AutoPatch version: 11.5.0
    AutoPatch started at: Sat Dec 29 2007 20:33:13
    APPL_TOP is set to k:\oracle\visappl
    NLS_LANG value from the environment is : American_America.UTF8
    NLS_LANG value for this AD utility run is : AMERICAN_AMERICA.UTF8
    Please enter the batchsize [1000] : 1000
    Please enter the name of the Oracle Applications System that this
    APPL_TOP belongs to.
    The Applications System name must be unique across all Oracle
    Applications Systems at your site, must be from 1 to 30 characters
    long, may only contain alphanumeric and underscore characters,
    and must start with a letter.
    Sample Applications System names are: "prod", "test", "demo" and
    "Development_2".
    Applications System Name [VIS] : VIS *
    NOTE: If you do not currently have certain types of files installed
    in this APPL_TOP, you may not be able to perform certain tasks.
    Example 1: If you don't have files used for installing or upgrading
    the database installed in this area, you cannot install or upgrade
    the database from this APPL_TOP.
    Example 2: If you don't have forms files installed in this area, you cannot
    generate them or run them from this APPL_TOP.
    Example 3: If you don't have concurrent program files installed in this area,
    you cannot relink concurrent programs or generate reports from this APPL_TOP.
    Do you currently have files used for installing or upgrading the database
    installed in this APPL_TOP [YES] ? YES *
    Do you currently have Java and HTML files for HTML-based functionality
    installed in this APPL_TOP [YES] ? YES *
    Do you currently have Oracle Applications forms files installed
    in this APPL_TOP [YES] ? YES *
    Do you currently have concurrent program files installed
    in this APPL_TOP [YES] ? YES *
    Please enter the name Oracle Applications will use to identify this APPL_TOP.
    The APPL_TOP name you select must be unique within an Oracle Applications
    System, must be from 1 to 30 characters long, may only contain
    alphanumeric and underscore characters, and must start with a letter.
    Sample APPL_TOP Names are: "prod_all", "demo3_forms2", and "forms1".
    APPL_TOP Name [server] : server *
    You are about to apply a patch to the installation of Oracle Applications
    in your ORACLE database 'VIS'
    using ORACLE executables in
    'k:\oracle\visora\8.0.6'.
    Is this the correct database [Yes] ? Yes
    AutoPatch needs the password for your 'SYSTEM' ORACLE schema
    in order to determine your installation configuration.
    Enter the password for your 'SYSTEM' ORACLE schema: *****
    Connecting to SYSTEM......Connected successfully.
    The ORACLE username specified below for Application Object Library
    uniquely identifies your existing product group: APPLSYS
    Enter the ORACLE password of Application Object Library [APPS] : *****
    AutoPatch is verifying your username/password.
    Connecting to APPLSYS......Connected successfully.
    The status of various features in this run of AutoPatch is:
    <-Feature version in->
    Feature Active? APPLTOP Data model Flags
    CHECKFILE Yes 1 1 Y N N Y N Y
    PREREQ Yes 6 6 Y N N Y N Y
    CONCURRENT_SESSIONS No 2 2 Y Y N Y Y N
    PATCH_TIMING Yes 2 2 Y N N Y N Y
    PATCH_HIST_IN_DB Yes 6 6 Y N N Y N Y
    SCHEMA_SWAP Yes 1 1 Y N N Y Y Y
    Connecting to SYSTEM......Connected successfully.
    Connecting to APPLSYS......Connected successfully.
    Identifier for the current session is 25932
    Reading product information from file...
    Reading language and territory information from file...
    Reading language information from applUS.txt ...
    Reading database to see what industry is currently installed.
    Reading FND_LANGUAGES to see what is currently installed.
    Currently, the following language is installed:
    Code Language Status
    US American English Base
    Your base language will be AMERICAN.
    Setting up module information.
    Reading database for information about the modules.
    Saving module information.
    Reading database for information about the products.
    Connecting to SYSTEM......Connected successfully.
    Connecting to APPLSYS......Connected successfully.
    Connecting to SYSTEM......Connected successfully.
    Reading database for information about how products depend on each other.
    Connecting to APPLSYS......Connected successfully.
    Reading topfile.txt ...
    Saving product information.
    Connecting to SYSTEM......Connected successfully.
    Connecting to APPLSYS......Connected successfully.
    AD code level : [11i.AD.I.2]
    STRT_TASK: [AutoPatch startup after aimini] [] [Sat Dec 29 2007 20:34:22]
    Connecting to APPS......Connected successfully.
    STRT_TASK: [Validate schema passwords] [] [Sat Dec 29 2007 20:34:23]
    Connecting to APPLSYS......Connected successfully.
    Connecting to APPS......Connected successfully.
    STOP_TASK: [Validate schema passwords] [] [Sat Dec 29 2007 20:34:23]
    STRT_TASK: [Upload Patch History information from filesystem] [] [Sat Dec 29 2007 20:34:23]
    Trying to obtain a lock...
    About to attempt instantiating the current-view snapshot: Sat Dec 29 2007 20:34:24
    Attempting to instantiate the current-view snapshot...
    Instantiated bug-fix info from baseline info.
    Done attempting to instantiate the current-view snapshot: Sat Dec 29 2007 20:40:40
    **************** S T A R T O F U P L O A D ****************
    Start date: Sat Dec 29 2007 20:40:40
    0 "left over" javaupdates.txt files uploaded to DB: Sat Dec 29 2007 20:40:40
    0 patches uploaded from the ADPSV format patch history files: Sat Dec 29 2007 20:40:40
    Uploading information about files copied during the previous runs ...
    0 "left over" filescopied_<session_id>.txt files uploaded to DB: Sat Dec 29 2007 20:40:40
    ****************** E N D O F U P L O A D ******************
    End date: Sat Dec 29 2007 20:40:40
    STOP_TASK: [Upload Patch History information from filesystem] [] [Sat Dec 29 2007 20:40:40]
    Enter the directory where your Oracle Applications patch has been unloaded
    The default directory is [K:\oracle\visappl\ad\11.5.0\bin] : h:\oracle\p38\5903765\
    STOP_TASK: [AutoPatch startup after aimini] [] [Sat Dec 29 2007 20:47:51]
    Please enter the name of your AutoPatch driver file : u5903765.drv
    STRT_TASK: [Run a single patch driver file] [] [Sat Dec 29 2007 20:48:07]
    STRT_TASK: [Steps before copy portion] [] [Sat Dec 29 2007 20:48:07]
    STRT_TASK: [Initial driver processing steps] [] [Sat Dec 29 2007 20:48:07]
    Start time for driver file is: Sat Dec 29 2007 20:48:07
    STOP_TASK: [Initial driver processing steps] [] [Sat Dec 29 2007 20:48:07]
    STRT_TASK: [Get Oracle Applications Release and read driver file] [] [Sat Dec 29 2007 20:48:07]
    STRT_TASK: [Get Oracle Applications Release] [] [Sat Dec 29 2007 20:48:07]
    Getting Oracle Applications Release...
    Connecting to APPLSYS......Connected successfully.
    Current installed release is 11.5.10.2
    Connecting to APPS......Connected successfully.
    STOP_TASK: [Get Oracle Applications Release] [] [Sat Dec 29 2007 20:48:09]
    STRT_TASK: [Process patch driver file] [] [Sat Dec 29 2007 20:48:09]
    Reading patch driver file...
    STRT_TASK: [Parse and load patch driver file] [] [Sat Dec 29 2007 20:48:09]
    Parsing and loading patch driver file...
    Processing line 10000.
    Processing line 20000.
    Processing line 30000.
    Processing line 40000.
    Processing line 50000.
    Processing line 60000.
    Processing line 70000.
    73911 lines processed.
    STOP_TASK: [Parse and load patch driver file] [] [Sat Dec 29 2007 20:48:09]
    STRT_TASK: [Check patch integrity] [] [Sat Dec 29 2007 20:48:09]
    Not checking patch integrity as integrity checking flag is turned off.
    STOP_TASK: [Check patch integrity] [] [Sat Dec 29 2007 20:48:09]
    Successfully read patch driver file.
    STOP_TASK: [Process patch driver file] [] [Sat Dec 29 2007 20:48:09]
    STRT_TASK: [Other driver and release-related logic] [] [Sat Dec 29 2007 20:48:09]
    Determining target release...
    Current target release is 11.5.10.2
    STOP_TASK: [Other driver and release-related logic] [] [Sat Dec 29 2007 20:48:10]
    STOP_TASK: [Get Oracle Applications Release and read driver file] [] [Sat Dec 29 2007 20:48:10]
    STRT_TASK: [Prereq checking logic] [] [Sat Dec 29 2007 20:48:10]
    STOP_TASK: [Prereq checking logic] [] [Sat Dec 29 2007 20:48:10]
    STRT_TASK: [Ask translated patch question] [] [Sat Dec 29 2007 20:48:10]
    STOP_TASK: [Ask translated patch question] [] [Sat Dec 29 2007 20:48:10]
    STRT_TASK: [Determine bugs to apply] [] [Sat Dec 29 2007 20:48:10]
    Determining which bug fixes to apply...
    Turning off bug fixes for products not installed or shared in the database...
    Resetting bug actions for bug fixes we will not apply...
    Processing files for Oracle Government Applications...
    Done determining which bug fixes to apply.
    Log and Info File sync point:
    Sat Dec 29 2007 20:48:10
    Turning off actions that reference unrecognized products.
    Log and Info File sync point:
    Sat Dec 29 2007 20:48:10
    End of unrecognized products checking.
    STOP_TASK: [Determine bugs to apply] [] [Sat Dec 29 2007 20:48:10]
    STRT_TASK: [Process action options] [] [Sat Dec 29 2007 20:48:10]
    STOP_TASK: [Process action options] [] [Sat Dec 29 2007 20:48:10]
    STRT_TASK: [Modify actions for bootstrap mode] [] [Sat Dec 29 2007 20:48:10]
    STOP_TASK: [Modify actions for bootstrap mode] [] [Sat Dec 29 2007 20:48:10]
    STRT_TASK: [Ask for number of parallel workers] [] [Sat Dec 29 2007 20:48:10]
    AD utilities can support a maximum of 999 workers. Your
    current database configuration supports a maximum of 91 workers.
    Oracle recommends that you use between 2 and 4 workers.
    Enter the number of parallel workers [1] : 4
    STOP_TASK: [Ask for number of parallel workers] [] [Sat Dec 29 2007 20:48:44]
    AutoPatch will run in parallel mode.
    STOP_TASK: [Steps before copy portion] [] [Sat Dec 29 2007 20:48:44]
    STRT_TASK: [Copy portion steps] [] [Sat Dec 29 2007 20:48:44]
    STRT_TASK: [apply new applterr.txt] [] [Sat Dec 29 2007 20:48:44]
    Did not need to apply new applterr.txt.
    STOP_TASK: [apply new applterr.txt] [] [Sat Dec 29 2007 20:48:44]
    STRT_TASK: [AutoSplice applprod.txt and applUS.txt] [] [Sat Dec 29 2007 20:48:44]
    Applying new applprod.txt (if any)...
    Did not need to apply new applprod.txt.
    STOP_TASK: [AutoSplice applprod.txt and applUS.txt] [] [Sat Dec 29 2007 20:48:44]
    STRT_TASK: [Version checking for driver files] [] [Sat Dec 29 2007 20:48:44]
    Performing version checking for driver files...
    Log and Info File sync point:
    Sat Dec 29 2007 20:48:44
    AutoPatch found some files which it will not apply.
    These files are listed in the AutoPatch informational message file.
    STOP_TASK: [Version checking for driver files] [] [Sat Dec 29 2007 20:48:47]
    STRT_TASK: [Copy driver files] [] [Sat Dec 29 2007 20:48:47]
    Copying driver files into installation area...
    You are running admvcode
    Header information is:
    $Header: aiopatch.lc 115.44 2004/07/07 09:05:05 cbhati ship $
    Start of admvcode session
    Date/Time is Sat Dec 29 2007 20:48:47
    Filelist file is: k:\oracle\visappl\admin\VIS\out\copymast.txt
    Patch Character Set is: us7ascii
    On-site Character Set is: UTF8
    Log and Info File sync point:
    Sat Dec 29 2007 20:48:47
    Information about files copied to the APPL_TOP would be written to the
    informational message file.
    Character set 'US7ASCII' is a subset of character set 'UTF8'.
    No character set conversion is required.
    Copying files to APPL_TOP...
    0 directories created.
    54 files copied without character set conversion.
    0 files copied with successful character set conversion.
    54 files copied successfully.
    0 files had fatal errors.
    admvcode is exiting with status 0
    End of admvcode session
    Date/time is Sat Dec 29 2007 20:48:50
    Done copying driver files into installation area.
    STOP_TASK: [Copy driver files] [] [Sat Dec 29 2007 20:48:50]
    STRT_TASK: [Forcecopy driver files] [] [Sat Dec 29 2007 20:48:50]
    ForceCopying driver files into installation area...
    No driver files were selected for ForceCopying.
    STOP_TASK: [Forcecopy driver files] [] [Sat Dec 29 2007 20:48:50]
    STRT_TASK: [Screen out files not valid for this configuration] [] [Sat Dec 29 2007 20:48:50]
    Screening out files not valid for this installation...
    STOP_TASK: [Screen out files not valid for this configuration] [] [Sat Dec 29 2007 20:48:50]
    STRT_TASK: [Read file driver files to get list of valid files] [] [Sat Dec 29 2007 20:48:50]
    Determining valid on-site files...
    Setting JAVA_TOP to 'k:\oracle\viscomn\java'
    STOP_TASK: [Read file driver files to get list of valid files] [] [Sat Dec 29 2007 20:49:19]
    STRT_TASK: [Perform libout actions] [] [Sat Dec 29 2007 20:49:19]
    Extracting object modules from product libraries...
    An error occurred while extracting files from library.
    Continue as if it were successful [No] : No
    You should check the file
    k:\oracle\visappl\admin\VIS\log\patch_install_5903765.log
    for errors.
    ===============================================================
    Please help me...

    How do you relink an executable file using adadmin.exe?
    - Open a new DOS session
    - Source the environment file (run "envshell.cmd" under %APPL_TOP%)
    - Run 'adadmin'
    - Select 'Maintain Applications Files menu'
    - Select 'Relink Applications programs'
    Enter list of products to link ('all' for all products) [all] : fnd
    Generate specific executables for each selected product [No] ? yes
    Enter executables to relink, or enter 'all' [all] : <Enter name of executable file>

  • An unexpected error has been detected by HotSpot Virtual Machine:

    Hi,
    When I upload the XML file with the debug window, this message appears during debugging.
    Please give me a solution.
    I have pasted the files below, in which I want to bypass LDAP.
    Please suggest what to do with these files. What is wrong with them?
    Jiten
    18 Oct 2007 17:02:09 0 INFO [main] security.SecurityUtil - SecurityUtil.login: Called with [email protected]; password=[C@1797795; sourceIpAddress=192.168.5.14; weblogin=false; realm=null
    18 Oct 2007 17:02:09 16    INFO  [main] security.SecurityUtil - login: start login: username: [email protected]
    18 Oct 2007 17:02:09 125 INFO [main] BOOT - -- boot log messages -->
    [BOOT] INFO: Loading OJB's properties: file:/C:/LMS/gsnx/deploy/webapp/gsnx.ear/webapp.war/WEB-INF/classes/OJB.properties
    [BOOT] WARN: Can't read logging properties file using path 'OJB-logging.properties', message is:
    C:\LMS\gsnx\OJB-logging.properties (The system cannot find the file specified)
    Will try to load logging properties from OJB.properties file
    [BOOT] INFO: Logging: Found 'log4j.properties' file, use class org.apache.ojb.broker.util.logging.Log4jLoggerImpl
    [BOOT] INFO: Log4J is already configured, will not search for log4j properties file
    18 Oct 2007 17:02:12 3282 INFO [main] security.SecurityUtil - SecurityUtil.login: Calling authentication with [email protected]; password=[C@1797795; realm=null
    # An unexpected error has been detected by HotSpot Virtual Machine:
    #  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6d85c14f, pid=4032, tid=3260
    # Java VM: Java HotSpot(TM) Client VM (1.5.0_11-b03 mixed mode)
    # Problematic frame:
    # V  [jvm.dll+0x11c14f]
    # An error report file with more information is saved as hs_err_pid4032.log
    # If you would like to submit a bug report, please visit:
    # http://java.sun.com/webapps/bugreport/crash.jsp
    Here is my login.config file:
    * Copyright (c) 2003-2005 E2open Inc. All rights reserved.
    * gsnx security configuration file
    * Note:
    * 1. At installation time, the java.security file in your JRE's lib/security/ directory should
    * be edited to add the following line, where {gsnx.home} is the home directory of the gsnx install:
    * login.config.url.1=file:{gsnx.home}/conf/login.config
    * 2. Edit this file to replace ldap-url with the correct url for your
    * LoginModule configuration.
    * @version $Id: login.config,v 1.8 2006/02/04 18:38:11 vgeorge Exp $
    gsnx.security.Login
    * LdapLoginModule Options:
    * ldap-url - LDAP connection URL, such as ldap://localhost:389
    * e.g. ldap-url="ldap://192.168.31.63:389"
    * dn-mask - The DN mask for user. The number 0 refers to the parameter
    * number for username. If this is specified then user search is ignored.
    * **This option is DEPRECATED**.
    * e.g. dn-mask="cn={0},ou=People,dc=gsnx,dc=com"
    * initialContextFactory - Class name of the initial context factory
    * e.g. initialContextFactory="com.sun.jndi.ldap.LdapCtxFactory"
    * connection-username - The DN used by the login module itself for authentication to
    * the directory server.
    * e.g. connection-username="uid=admin,ou=system"
    * connection-password - Password that is used by the login module to authenticate
    * itself to the directory server
    * e.g. connection-password="password"
    * connectionProtocol - The security protocol to use. This value is determined by the
    * service provider. An example would be SSL.
    * e.g. connectionProtocol="ldap"
    * authentication - The security level to use. Its value is one of the following
    * strings: "none", "simple", "strong".
    * e.g. authentication="simple"
    * user-search-base - the base DN for the user search
    * e.g. user-search-base="ou=users,ou=system"
    * user-search-pattern - filter specification how to search for the user object.
    * RFC 2254 filters are allowed. For example: (uid=user1). To parameterize the value
    * of the CN attribute type, specify (uid = {0}). The number 0 refers to the parameter
    * number for username. This query must return exactly one object.
    * e.g. user-search-pattern="uid={0}"
    * user-search-scope-subtree - Directory search scope for the user (true | false)
    * true - directory search scope is SUBTREE
    * false - directory search scope is ONE-LEVEL
    * e.g. user-search-scope-subtree="true"
    * user-password-changepw-gsnx-handler - handler used to change the password.
    * If this handler is not specified, the change password feature will not be available.
    * If "com.gsnx.core.server.security.LdapLoginModule" is used, then the
    * option user-password-attribute is required.
    * e.g. user-password-changepw-gsnx-handler="com.gsnx.core.server.security.LdapLoginModule"
    * user-password-attribute - attribute in LDAP that stores the user password
    * If this is not specified, the change password feature will not work
    * when "com.gsnx.core.server.security.LdapLoginModule", is used as
    * user-password-changepw-gsnx-handler
    * e.g. user-password-attribute="userPassword"
    * role-search-base - The base DN for the group membership search
    * e.g. role-search-base="ou=groups,ou=system"
    * role-name-attribute - LDAP attribute type that identifies group name attribute in the object
    * returned from the group membership query. The group membership query is defined
    * by the role-search-pattern parameter. Typically the group name parameter is "cn".
    * e.g. role-name="cn"
    * role-search-pattern - The filter specification how to search for the role object.
    * RFC 2254 filters are allowed. For example: (uniqueMember = {0}). The number 0
    * refers to the DN of the authenticated user. Note that if role membership
    * for the user is defined in the member-of-like attribute (see user-role-name
    * parameter) you may not need to search for group membership with the query.
    * e.g. role-search-pattern="(uniqueMember={0})"
    * role-search-scope-subtree - Directory search scope for the role (true | false).
    * true - directory search scope is SUBTREE
    * false - directory search scope is ONE-LEVEL
    * e.g. role-search-scope-subtree="false"
    * user-role-attribute - LDAP attribute type for the user group membership. Different LDAP
    * schemas represent user group membership in different ways such as memberOf,
    * isMemberOf, member, etc. Values of these attributes are identifiers of groups that
    * a user is a member of. For example, if you have: memberOf: cn=admin,ou=groups,dc=gsnx,
    * specify "memberOf" as the value for the user-role-name attribute. Be aware of the
    * relationship between this parameter and group membership query. Typically
    * they will return the same data.
    * e.g. user-role-name="memberOf"
    * ldap://ldap-qa.dev.e2open.com:389
    * ldap://192.168.31.63:389
    com.gsnx.core.server.security.PassthruLoginModule required
    *com.gsnx.core.server.security.LdapLoginModule required
    initial-context-factory="com.sun.jndi.ldap.LdapCtxFactory"
    ldap-url="ldap://ldap-qa.dev.e2open.com:389"
    connection-username="cn=Manager,dc=gsnx,dc=com"
    connection-password="slapface"
    connection-protocol="ldap"
    authentication="simple"
    user-search-base="dc=gsnx,dc=com"
    user-search-pattern="cn={0}"
    user-search-scope-subtree="true"
    user-password-changepw-gsnx-handler="com.gsnx.core.server.security.PassthruLoginModule"
    user-password-attribute="userPassword"
    role-search-base=""
    role-name-attribute=""
    role-search-pattern=""
    role-search-scope-subtree=""
    user-role-attribute="";
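At runtime, a JAAS entry such as `gsnx.security.Login` above resolves to a list of `AppConfigurationEntry` objects; swapping `LdapLoginModule` for `PassthruLoginModule` in the file changes authentication without touching application code. A minimal sketch of that mapping using only the JDK JAAS API (the gsnx module class name is copied from the file above purely as a string; the class itself is not assumed to be on the classpath here):

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class LoginConfigSketch {
    // Models what the "gsnx.security.Login" block resolves to when the
    // Passthru module is selected. Option values are placeholders.
    static Configuration passthruConfig() {
        Map<String, String> options = new HashMap<>();
        options.put("ldap-url", "ldap://ldap-qa.dev.e2open.com:389");
        options.put("authentication", "simple");
        AppConfigurationEntry entry = new AppConfigurationEntry(
                "com.gsnx.core.server.security.PassthruLoginModule",
                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                options);
        return new Configuration() {
            @Override
            public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
                // JAAS looks entries up by application name; unknown names
                // fall through to the "other" policy (or fail).
                return "gsnx.security.Login".equals(name)
                        ? new AppConfigurationEntry[] { entry }
                        : null;
            }
        };
    }

    public static void main(String[] args) {
        AppConfigurationEntry e =
                passthruConfig().getAppConfigurationEntry("gsnx.security.Login")[0];
        System.out.println(e.getLoginModuleName());
    }
}
```

In the real deployment the same lookup is driven by `new LoginContext("gsnx.security.Login", callbackHandler)`, which reads whichever configuration `login.config.url.1` points at.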
    This is LoginConfig.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- The XML based JAAS login configuration read by the
    org.jboss.security.auth.login.XMLLoginConfig mbean. Add
    an application-policy element for each security domain.
    The outline of the application-policy is:
    <application-policy name="security-domain-name">
    <authentication>
    <login-module code="login.module1.class.name" flag="control_flag">
    <module-option name = "option1-name">option1-value</module-option>
    <module-option name = "option2-name">option2-value</module-option>
    </login-module>
    <login-module code="login.module2.class.name" flag="control_flag">
    </login-module>
    </authentication>
    </application-policy>
    $Revision: 1.12.2.2 $
    --><!DOCTYPE policy PUBLIC "-//JBoss//DTD JBOSS Security Config 3.0//EN" "http://www.jboss.org/j2ee/dtd/security_config.dtd">
    <policy>
    <!-- Used by clients within the application server VM such as
    mbeans and servlets that access EJBs.
    -->
    <application-policy name="gsnx.security.Login">
    <authentication>
    <!-- <login-module code="com.gsnx.core.server.security.LdapLoginModule" flag="required">-->
    <login-module code="com.gsnx.core.server.security.PassthruLoginModule" flag="required">
    <module-option name="initial-context-factory">com.sun.jndi.ldap.LdapCtxFactory
    </module-option>
    <!--<module-option name="user-password-changepw-gsnx-handler">com.gsnx.core.server.security.LdapLoginModule-->
    <module-option name="user-password-changepw-gsnx-handler">com.gsnx.core.server.security.PassthruLoginModule
    </module-option>
    <!-- <module-option name="ldap-url">ldap://192.168.31.63:389</module-option> -->
    <!-- <module-option name="ldap-url">ldap://10.120.17.253:389</module-option> -->
    <module-option name="ldap-url">ldap://ldap-qa.dev.e2open.com:389</module-option>
    <module-option name="connection-username">cn=Manager,dc=gsnx,dc=com</module-option>
    <module-option name="connection-password">slapface</module-option>
    <module-option name="connection-protocol">ldap</module-option>
    <module-option name="authentication">simple</module-option>
    <module-option name="user-search-base">dc=gsnx,dc=com</module-option>
    <module-option name="user-search-pattern">cn={0}</module-option>
    <module-option name="user-search-scope-subtree">true</module-option>
    <module-option name="user-password-attribute"/>
    <module-option name="role-search-base"/>
    <module-option name="role-name-attribute"/>
    <module-option name="role-search-pattern"/>
    <module-option name="role-search-scope-subtree"/>
    <module-option name="user-role-attribute"/>
    </login-module>
    </authentication>
    </application-policy>
    <application-policy name="client-login">
    <authentication>
    <login-module code="org.jboss.security.ClientLoginModule" flag="required">
    </login-module>
    </authentication>
    </application-policy>
    <!-- Security domain for JBossMQ -->
    <application-policy name="jbossmq">
    <authentication>
    <login-module code="org.jboss.security.auth.spi.DatabaseServerLoginModule" flag="required">
    <module-option name="unauthenticatedIdentity">guest</module-option>
    <module-option name="dsJndiName">java:/DefaultDS</module-option>
    <module-option name="principalsQuery">SELECT PASSWD FROM JMS_USERS WHERE USERID=?</module-option>
    <module-option name="rolesQuery">SELECT ROLEID, 'Roles' FROM JMS_ROLES WHERE USERID=?</module-option>
    </login-module>
    </authentication>
    </application-policy>
    <!-- Security domain for JBossMQ when using file-state-service.xml
    <application-policy name = "jbossmq">
    <authentication>
    <login-module code = "org.jboss.mq.sm.file.DynamicLoginModule"
    flag = "required">
    <module-option name = "unauthenticatedIdentity">guest</module-option>
    <module-option name = "sm.objectname">jboss.mq:service=StateManager</module-option>
    </login-module>
    </authentication>
    </application-policy>
    -->
    <!-- Security domains for testing new jca framework -->
    <application-policy name="HsqlDbRealm">
    <authentication>
    <login-module code="org.jboss.resource.security.ConfiguredIdentityLoginModule" flag="required">
    <module-option name="principal">sa</module-option>
    <module-option name="userName">sa</module-option>
    <module-option name="password"/>
    <module-option name="managedConnectionFactoryName">jboss.jca:service=LocalTxCM,name=DefaultDS</module-option>
    </login-module>
    </authentication>
    </application-policy>
    <application-policy name="JmsXARealm">
    <authentication>
    <login-module code="org.jboss.resource.security.ConfiguredIdentityLoginModule" flag="required">
    <module-option name="principal">guest</module-option>
    <module-option name="userName">guest</module-option>
    <module-option name="password">guest</module-option>
    <module-option name="managedConnectionFactoryName">jboss.jca:service=TxCM,name=JmsXA</module-option>
    </login-module>
    </authentication>
    </application-policy>
    <!-- A template configuration for the jmx-console web application. This
    defaults to the UsersRolesLoginModule the same as other and should be
    changed to a stronger authentication mechanism as required.
    -->
    <application-policy name="jmx-console">
    <authentication>
    <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
    <module-option name="usersProperties">props/jmx-console-users.properties</module-option>
    <module-option name="rolesProperties">props/jmx-console-roles.properties</module-option>
    </login-module>
    </authentication>
    </application-policy>
    <!-- A template configuration for the web-console web application. This
    defaults to the UsersRolesLoginModule the same as other and should be
    changed to a stronger authentication mechanism as required.
    -->
    <application-policy name="web-console">
    <authentication>
    <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
    <module-option name="usersProperties">web-console-users.properties</module-option>
    <module-option name="rolesProperties">web-console-roles.properties</module-option>
    </login-module>
    </authentication>
    </application-policy>
    <!-- A template configuration for the JBossWS web application (and transport layer!).
    This defaults to the UsersRolesLoginModule the same as other and should be
    changed to a stronger authentication mechanism as required.
    -->
    <application-policy name="JBossWS">
    <authentication>
    <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
    <module-option name="unauthenticatedIdentity">anonymous</module-option>
    </login-module>
    </authentication>
    </application-policy>
    <!-- The default login configuration used by any security domain that
    does not have a application-policy entry with a matching name
    -->
    <application-policy name="other">
    <!-- A simple server login module, which can be used when the number
    of users is relatively small. It uses two properties files:
    users.properties, which holds users (key) and their password (value).
    roles.properties, which holds users (key) and a comma-separated list of
    their roles (value).
    The unauthenticatedIdentity property defines the name of the principal
    that will be used when a null username and password are presented as is
    the case for an unuathenticated web client or MDB. If you want to
    allow such users to be authenticated add the property, e.g.,
    unauthenticatedIdentity="nobody"
    -->
    <authentication>
    <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required"/>
    </authentication>
    </application-policy>
    </policy>
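With several commented-out `login-module` alternatives in the XML above, it is easy to misread which module a policy actually selects. A small sketch that checks this programmatically with the JDK's DOM parser; it is run here against a trimmed-down fragment rather than the full file:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class PolicyScan {
    // Returns the first login-module code of the named application-policy,
    // or null if that policy is absent.
    static String moduleCode(String xml, String policyName) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList policies = doc.getElementsByTagName("application-policy");
        for (int i = 0; i < policies.getLength(); i++) {
            Element p = (Element) policies.item(i);
            if (policyName.equals(p.getAttribute("name"))) {
                NodeList modules = p.getElementsByTagName("login-module");
                if (modules.getLength() > 0) {
                    return ((Element) modules.item(0)).getAttribute("code");
                }
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<policy>"
                + "<application-policy name=\"gsnx.security.Login\">"
                + "<authentication>"
                + "<login-module code=\"com.gsnx.core.server.security.PassthruLoginModule\" flag=\"required\"/>"
                + "</authentication>"
                + "</application-policy>"
                + "</policy>";
        System.out.println(moduleCode(xml, "gsnx.security.Login"));
    }
}
```

Commented-out `<login-module>` elements are not returned by the scan, since XML comments are not elements, which is exactly the check you want when debugging a bypassed LDAP setup.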
    This is java.security:
    # This is the "master security properties file".
    # In this file, various security properties are set for use by
    # java.security classes. This is where users can statically register
    # Cryptography Package Providers ("providers" for short). The term
    # "provider" refers to a package or set of packages that supply a
    # concrete implementation of a subset of the cryptography aspects of
    # the Java Security API. A provider may, for example, implement one or
    # more digital signature algorithms or message digest algorithms.
    # Each provider must implement a subclass of the Provider class.
    # To register a provider in this master security properties file,
    # specify the Provider subclass name and priority in the format
    # security.provider.<n>=<className>
    # This declares a provider, and specifies its preference
    # order n. The preference order is the order in which providers are
    # searched for requested algorithms (when no specific provider is
    # requested). The order is 1-based; 1 is the most preferred, followed
    # by 2, and so on.
    # <className> must specify the subclass of the Provider class whose
    # constructor sets the values of various properties that are required
    # for the Java Security API to look up the algorithms or other
    # facilities implemented by the provider.
    # There must be at least one provider specification in java.security.
    # There is a default provider that comes standard with the JDK. It
    # is called the "SUN" provider, and its Provider subclass
    # named Sun appears in the sun.security.provider package. Thus, the
    # "SUN" provider is registered via the following:
    # security.provider.1=sun.security.provider.Sun
    # (The number 1 is used for the default provider.)
    # Note: Statically registered Provider subclasses are instantiated
    # when the system is initialized. Providers can be dynamically
    # registered instead by calls to either the addProvider or
    # insertProviderAt method in the Security class.
    # List of providers and their preference orders (see above):
    security.provider.1=sun.security.provider.Sun
    security.provider.2=sun.security.rsa.SunRsaSign
    security.provider.3=com.sun.net.ssl.internal.ssl.Provider
    security.provider.4=com.sun.crypto.provider.SunJCE
    security.provider.5=sun.security.jgss.SunProvider
    security.provider.6=com.sun.security.sasl.Provider
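The preference order declared by the `security.provider.<n>` lines above can be confirmed at runtime: `Security.getProviders()` returns providers in exactly that order, plus any registered dynamically via `addProvider` or `insertProviderAt`. A minimal check (the provider names printed will vary by JDK version):

```java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Index 0 corresponds to security.provider.1 in java.security.
        Provider[] providers = Security.getProviders();
        for (int i = 0; i < providers.length; i++) {
            System.out.println((i + 1) + ". " + providers[i].getName());
        }
    }
}
```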
    # Select the source of seed data for SecureRandom. By default an
    # attempt is made to use the entropy gathering device specified by
    # the securerandom.source property. If an exception occurs when
    # accessing the URL then the traditional system/thread activity
    # algorithm is used.
    # On Solaris and Linux systems, if file:/dev/urandom is specified and it
    # exists, a special SecureRandom implementation is activated by default.
    # This "NativePRNG" reads random bytes directly from /dev/urandom.
    # On Windows systems, the URLs file:/dev/random and file:/dev/urandom
    # enables use of the Microsoft CryptoAPI seed functionality.
    securerandom.source=file:/dev/urandom
    # The entropy gathering device is described as a URL and can also
    # be specified with the system property "java.security.egd". For example,
    # -Djava.security.egd=file:/dev/urandom
    # Specifying this system property will override the securerandom.source
    # setting.
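The effect of `securerandom.source` (or the overriding `-Djava.security.egd` system property) is only visible indirectly, through the `SecureRandom` instances it seeds. A small sketch; the algorithm name reported varies by platform and JDK:

```java
import java.security.SecureRandom;

public class SeedDemo {
    public static void main(String[] args) {
        // The default constructor picks the highest-priority SecureRandom
        // implementation, seeded from the configured entropy source.
        SecureRandom rng = new SecureRandom();
        byte[] bytes = new byte[16];
        rng.nextBytes(bytes);
        System.out.println(rng.getAlgorithm() + " produced " + bytes.length + " bytes");
    }
}
```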
    # Class to instantiate as the javax.security.auth.login.Configuration
    # provider.
    login.configuration.provider=com.sun.security.auth.login.ConfigFile
    # Default login configuration file
    login.config.url.1=C:\LMS\gsnx\core\src\conf\login.config
    # Class to instantiate as the system Policy. This is the name of the class
    # that will be used as the Policy object.
    policy.provider=sun.security.provider.PolicyFile
    # The default is to have a single system-wide policy file,
    # and a policy file in the user's home directory.
    policy.url.1=file:${java.home}/lib/security/java.policy
    policy.url.2=file:${user.home}/.java.policy
    # whether or not we expand properties in the policy file
    # if this is set to false, properties (${...}) will not be expanded in policy
    # files.
    policy.expandProperties=true
    # whether or not we allow an extra policy to be passed on the command line
    # with -Djava.security.policy=somefile. Comment out this line to disable
    # this feature.
    policy.allowSystemProperty=true
    # whether or not we look into the IdentityScope for trusted Identities
    # when encountering a 1.1 signed JAR file. If the identity is found
    # and is trusted, we grant it AllPermission.
    policy.ignoreIdentityScope=false
    # Default keystore type.
    keystore.type=jks
    # Class to instantiate as the system scope:
    system.scope=sun.security.provider.IdentityDatabase
    # List of comma-separated packages that start with or equal this string
    # will cause a security exception to be thrown when
    # passed to checkPackageAccess unless the
    # corresponding RuntimePermission ("accessClassInPackage."+package) has
    # been granted.
    package.access=sun.
    # List of comma-separated packages that start with or equal this string
    # will cause a security exception to be thrown when
    # passed to checkPackageDefinition unless the
    # corresponding RuntimePermission ("defineClassInPackage."+package) has
    # been granted.
    # by default, no packages are restricted for definition, and none of
    # the class loaders supplied with the JDK call checkPackageDefinition.
    #package.definition=
    # Determines whether this properties file can be appended to
    # or overridden on the command line via -Djava.security.properties
    security.overridePropertiesFile=true
    # Determines the default key and trust manager factory algorithms for
    # the javax.net.ssl package.
    ssl.KeyManagerFactory.algorithm=SunX509
    ssl.TrustManagerFactory.algorithm=PKIX
    # Determines the default SSLSocketFactory and SSLServerSocketFactory
    # provider implementations for the javax.net.ssl package. If, due to
    # export and/or import regulations, the providers are not allowed to be
    # replaced, changing these values will produce non-functional
    # SocketFactory or ServerSocketFactory implementations.
    #ssl.SocketFactory.provider=
    #ssl.ServerSocketFactory.provider=
    # The Java-level namelookup cache policy for successful lookups:
    # any negative value: caching forever
    # any positive value: the number of seconds to cache an address for
    # zero: do not cache
    # default value is forever (FOREVER). For security reasons, this
    # caching is made forever when a security manager is set.
    # NOTE: setting this to anything other than the default value can have
    # serious security implications. Do not set it unless
    # you are sure you are not exposed to DNS spoofing attack.
    #networkaddress.cache.ttl=-1
    # The Java-level namelookup cache policy for failed lookups:
    # any negative value: cache forever
    # any positive value: the number of seconds to cache negative lookup results
    # zero: do not cache
    # In some Microsoft Windows networking environments that employ
    # the WINS name service in addition to DNS, name service lookups
    # that fail may take a noticeably long time to return (approx. 5 seconds).
    # For this reason the default caching policy is to maintain these
    # results for 10 seconds.
    networkaddress.cache.negative.ttl=10
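Both cache TTL settings above are ordinary security properties, so they can also be changed from code at startup (before the first name lookup) instead of editing java.security. The value "30" below is an illustrative choice, not a recommendation:

```java
import java.security.Security;

public class DnsCacheTtl {
    public static void main(String[] args) {
        // Cache successful lookups for 30 seconds, failures for 10.
        Security.setProperty("networkaddress.cache.ttl", "30");
        Security.setProperty("networkaddress.cache.negative.ttl", "10");
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```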
    # Properties to configure OCSP for certificate revocation checking
    # Enable OCSP
    # By default, OCSP is not used for certificate revocation checking.
    # This property enables the use of OCSP when set to the value "true".
    # NOTE: SocketPermission is required to connect to an OCSP responder.
    # Example,
    # ocsp.enable=true
    # Location of the OCSP responder
    # By default, the location of the OCSP responder is determined implicitly
    # from the certificate being validated. This property explicitly specifies
    # the location of the OCSP responder. The property is used when the
    # Authority Information Access extension (defined in RFC 3280) is absent
    # from the certificate or when it requires overriding.
    # Example,
    # ocsp.responderURL=http://ocsp.example.net:80
    # Subject name of the OCSP responder's certificate
    # By default, the certificate of the OCSP responder is that of the issuer
    # of the certificate being validated. This property identifies the certificate
    # of the OCSP responder when the default does not apply. Its value is a string
    # distinguished name (defined in RFC 2253) which identifies a certificate in
    # the set of certificates supplied during cert path validation. In cases where
    # the subject name alone is not sufficient to uniquely identify the certificate
    # then both the "ocsp.responderCertIssuerName" and
    # "ocsp.responderCertSerialNumber" properties must be used instead. When this
    # property is set then those two properties are ignored.
    # Example,
    # ocsp.responderCertSubjectName="CN=OCSP Responder, O=XYZ Corp"
    # Issuer name of the OCSP responder's certificate
    # By default, the certificate of the OCSP responder is that of the issuer
    # of the certificate being validated. This property identifies the certificate
    # of the OCSP responder when the default does not apply. Its value is a string
    # distinguished name (defined in RFC 2253) which identifies a certificate in
    # the set of certificates supplied during cert path validation. When this
    # property is set then the "ocsp.responderCertSerialNumber" property must also
    # be set. When the "ocsp.responderCertSubjectName" property is set then this
    # property is ignored.
    # Example,
    # ocsp.responderCertIssuerName="CN=Enterprise CA, O=XYZ Corp"
    # Serial number of the OCSP responder's certificate
    # By default, the certificate of the OCSP responder is that of the issuer
    # of the certificate being validated. This property identifies the certificate
    # of the OCSP responder when the default does not apply. Its value is a string
    # of hexadecimal digits (colon or space separators may be present) which
    # identifies a certificate in the set of certificates supplied during cert path
    # validation. When this property is set then the "ocsp.responderCertIssuerName"
    # property must also be set. When the "ocsp.responderCertSubjectName" property
    # is set then this property is ignored.
    # Example,
    # ocsp.responderCertSerialNumber=2A:FF:00

    user564785 wrote:
    "I am trying to install Oracle 11gR2 on a VM machine with Red Hat Linux 6 (64-bit)."
    That is an Oracle database problem. Even though the Oracle product uses Java, it still represents a problem with that product (Oracle) rather than Java, so you need to start with an Oracle database forum.

  • Error 6 occurred at Open/Create/Replace File in NI_Excel.lvclass:Save Report to

    I have an application that works on LV60 and when run in LV86 I get the following error:
    Error 6 occurred at Open/Create/Replace File in NI_Excel.lvclass:Save Report to File.vi->SWF001 Test.vi
    Possible reason(s):
    LabVIEW: Generic file I/O error.
    =========================
    NI-488: I/O operation aborted.
    C:\SWF001 IO Files\1_Single.xls
    The application does:
    New Report.vi
    Excel Get Worksheet.vi
    Excel Easy Table.vi
    Excel Easy Text.vi (4 of them)
    Save Report to File.vi
    Dispose Report.vi
    all in a nice chain (errors and report-in/outs) as is typical
    For some reason I'm getting this error. I'm using it with Excel 2003 SP3, and the spreadsheet contains macros (I get a prompt asking "should I enable macros?", which I answer yes, that I didn't get with LV60 and the older version of Excel). This is probably not the problem, but it deserves mention.
    The file does exist on the system (and it appears the application is writing to it successfully - though perhaps truncated as the error indicates- i can't tell for sure).
    I can open and save the Excel file independently of LV just fine.
    Any clues?
    Many thanks! -David
    Solved!
    Go to Solution.

    Unfortunately, an Error 6 is a catch all category for all sorts of file IO errors that LV doesn't know how to handle. For example, while it doesn't sound like your problem, trying to access a file that resides on a network drive that isn't currently mounted can generate an Error 6.
    Have you tried saving the file to a different location?
    If you restart your computer and run the VI does it generate the error the very first time you run it?
    Can you post your code - or at least a subset of it that demonstrates the problem?
    Is Excel open when you are running the code?
    Also, as a test try modifying your code so it has a unique name every time you save it.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • SQL Error: ORA-12899: value too large for column

    Hi,
    I'm trying to understand the above error. It occurs when we are migrating data from one oracle database to another:
    Error report:
    SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
    12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
    which is too wide for the width of the destination column.
    The name of the column is given, along with the actual width
    of the value, and the maximum allowed width of the column.
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
    and destination column data types.
    Either make the destination column wider, or use a subset
    of the source column (i.e. use substring).
    The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    The source and target table are identical and the column definitions are exactly the same. The column we get the error on is of CHAR(8). To migrate the data we use either a dblink or oracle datapump, both result in the same error. The data in the column is a fixed length string of 8 characters.
    To resolve the error the column "COL_XYZ" gets widened by:
    alter table TAB_XYZ modify (COL_XYZ varchar2(10));
    -alter table TAB_XYZ succeeded.
    We now move the data from the source into the target table without problem and then run:
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    -Error report:
    SQL Error: ORA-01441: cannot decrease column length because some value is too big
    01441. 00000 - "cannot decrease column length because some value is too big"
    *Cause:   
    *Action:
    So we leave the column width at 10, but the curious thing is - once we have the data in the target table, we can then truncate the same table at source (ie. get rid of all the data) and move the data back in the original table (with COL_XYZ set at CHAR(8)) - without any issue.
    My guess the error has something to do with the storage on the target database, but I would like to understand why. If anybody has an idea or suggestion what to look for - much appreciated.
    Cheers.

    843217 wrote:
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    You are looking at character lengths vs byte lengths.
    The data in the column is a fixed length string of 8 characters.
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    varchar2(8 byte) or varchar2(8 char)?
    Use SQL Reference for datatype specification, length function, etc.
    For more info, see the relevant forum on the topic. And of course, the Globalization support guide.
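    To illustrate the character-vs-byte distinction outside the database: under a multibyte encoding such as UTF-8 (the analogue of an AL32UTF8 database character set with byte length semantics), an 8-character string containing accented characters occupies more than 8 bytes, which matches the "actual: 10, maximum: 8" mismatch in the error above. A minimal Java sketch (the string values are made-up examples):

    ```java
    import java.nio.charset.StandardCharsets;

    public class CharVsByte {
        public static void main(String[] args) {
            String ascii = "ABCDEFGH";    // 8 characters, 8 bytes in UTF-8
            String accented = "ÄBCDEFGÜ"; // 8 characters, but Ä and Ü take 2 bytes each

            System.out.println(ascii.length() + " chars / "
                    + ascii.getBytes(StandardCharsets.UTF_8).length + " bytes");
            // prints "8 chars / 8 bytes"
            System.out.println(accented.length() + " chars / "
                    + accented.getBytes(StandardCharsets.UTF_8).length + " bytes");
            // prints "8 chars / 10 bytes"
        }
    }
    ```

    This is why a VARCHAR2(8 BYTE) column rejects an 8-character value that an 8-character CHAR column with character semantics would accept, and why declaring the column as varchar2(8 char) is the usual fix.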

  • Oracle Reports - Multi lingual pdf - jvm error

    Hi all,
    We are trying to develop a multilingual pdf report in oracle reports 9i. When we do font subsetting or embedding we get the error:
    stackpointer=3bb03200
    JVMXM004: JVM is performing abort shutdown sequence
    The error occurs consistently with font embedding, but with subsetting it appears randomly; we are unable to find any pattern to it. Can anyone please advise us on this?

    1) Please try in latest patchset 9.0.2.3
    2) A very similar bug was recently logged on this issue by a customer (I do not know whether it was by you :-))
    BUG 3880762 - REPORTS ENGINE CRASHES WHEN GENERATING LARGE PDF SUBSETTING OUTPUT
    (Not yet fixed, this bug in initial description phase)
    (It gives same error)
    If this is critical to you then can try getting in touch with Support to escalate the priority of the bug.
    (First confirm with support whether it is the same issue)
    [    All Docs for all versions    ]
    http://otn.oracle.com/documentation/reports.html
    [     Publishing reports to web  - 10G  ]
    http://download.oracle.com/docs/html/B10314_01/toc.htm (html)
    http://download.oracle.com/docs/pdf/B10314_01.pdf (pdf)
    [   Building reports  - 10G ]
    http://download.oracle.com/docs/pdf/B10602_01.pdf (pdf)
    http://download.oracle.com/docs/html/B10602_01/toc.htm (html)
    [   Forms Reports Integration whitepaper  9i ]
    http://otn.oracle.com/products/forms/pdf/frm9isrw9i.pdf
    ---------------------------------------------------------------------------------

  • Error while importing text file to Power Pivot via Table Import Wizard

    Hello there,
    I'm a rookie at handling Excel, to be honest. I've been trying to import a .txt file to Excel (it contains around 14 million rows of information), but Excel won't let me since it can only handle around 1 million rows per sheet (it would be too troublesome to do this 14 times across 14 different sheets). Then I googled and found that Power Pivot lets me handle around 100 million rows in a pivot.
    I'm trying to import this .txt file to Power Pivot by doing the following:
    Home tab > From Text > File Path (insert .txt location here) > Finish
    After that it shows the Table Import Wizard retrieving the rows from the .txt file. Problem is... at the end of the import (once it hits row 14,000,000 or so) it shows an error. It states the following:
    "Memory error: Allocation failure : Not enough storage is available to process this command. .
    The current operation was cancelled because another operation in the transaction failed."
    Please note that the size of the .txt file is 900 MB.
    Another thing is that, when I select the .txt file from its location at the File Path section, the "preview" field only shows 50 rows, not all of them. Is that okay to happen?
    Is it because the size of the .txt file is too big?
    Please let me know if you need more info in regards to this.
    Thanks in advance.

    Hi AdiUser,
    Only showing 50 rows for the preview is normal behaviour. Based on what you've described, it sounds like Power Pivot is running out of memory when you are loading this .txt file. Are you running the 32 bit or 64 bit version of Excel? Also, if you are running the 64 bit version, how much RAM is available?
    If you're running the 32 bit version then the available RAM for Power Pivot is less than 1 GB even if the system has more than this installed. You can address this by installing the 64 bit version. If you're running the 64 bit version then it may be
    that you don't have enough RAM to accommodate this amount of data and you may consider upgrading the RAM. Alternatively, you could try and reduce the size of the table in memory by using the preview screen to remove any columns that are not of use. This
    is especially important if there are any column with a high number of unique values that aren't actually needed in the model. In addition to removing columns altogether, you could try filtering the data so that only a subset of the rows are imported.
    This can also be done from the preview screen.
    Regards,
    Michael Amadi
    Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to vote it as helpful :)
    Website: http://www.nimblelearn.com
    Blog: http://www.nimblelearn.com/blog
    Twitter: @nimblelearn

  • Error With Export/Print from Crystal Report Viewer

    Hello there,
    I've searched through the web and SAP discussion boards with not much luck with this issue.
    After working through this for some days now I've decided to look here for help.
    Environment:
    I have created a web Crystal Report viewer application(Developed with SBOP BI Platform 4.0 SP06 .NET SDK Runtime) that communicates with a managed Cyrstal Server 2011 SP4 (Product 14.0)
    I am able to connect and authenticate with the server, retrieve a token for communication and display reports in the Crystal report Viewer successfully.
    Problem:
    When I attempt to export, I receive the prompt to select format and pages.
    When I click export after selections most times I receive an error with the text
    Unable to cast COM object of type 'System.__ComObject' to interface type 'CrystalDecisions.ReportAppServer.DataDefModel.PropertyBag'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{74EEBC42-6C5D-11D3-9172-00902741EE7C}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
    Other times the page simply refreshes on export.
    When I click to print, no print dialog is displayed the page always refreshes and no error is displayed.
    No Print or Export document is ever created.
    As many print/export issues seems to be related, I'm guessing this two issues are as well.
    Notes:
    I am utilizing the ReportClientDocument model
    I am storing this in session to use as the crystal report viewer report source on postbacks
    I am assigning a subset of export formats to the crystal report viewer
    I am setting particular parameters as well on the report source
    At this point I would appreciate every assistance I may receive on this issue
    Thanks in advance,
    Below is the pertinent code
    Code:
    <aspx>
       <CR:CrystalReportViewer ID="CrystalReportViewer1" runat="server"
       AutoDataBind="true" EnableDatabaseLogonPrompt="False"
       BestFitPage="False" ReuseParameterValuesOnRefresh="True"
      CssClass="reportFrame" Height="1000px" Width="1100px" EnableDrillDown="False"
      ToolPanelView="None" PrintMode="Pdf"/>
    <Codebehind>
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using CrystalDecisions.Enterprise;
    using CrystalDecisions.ReportAppServer.ClientDoc;
    using CrystalDecisions.ReportAppServer.CommonObjectModel;
    using CrystalDecisions.ReportAppServer.Controllers;
    using CrystalDecisions.ReportAppServer.DataDefModel;
    using CrystalDecisions.ReportAppServer.ReportDefModel;
    using CrystalDecisions.Shared;
    namespace ClassicInternalReportPage
    {
        public partial class Reports : System.Web.UI.Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                if (!String.IsNullOrEmpty(Convert.ToString(Session["LogonToken"])) && !IsPostBack)
                {
                    SessionMgr sessionMgr = new SessionMgr();
                    EnterpriseSession enterpriseSession = sessionMgr.LogonWithToken(Session["LogonToken"].ToString());
                    EnterpriseService reportService = enterpriseSession.GetService("RASReportFactory");
                    InfoStore infoStore = new InfoStore(enterpriseSession.GetService("InfoStore"));
                    if (reportService != null)
                    {
                        string queryString = String.Format("Select SI_ID, SI_NAME, SI_PARENTID From CI_INFOOBJECTS "
                            + "Where SI_PROGID='CrystalEnterprise.Report' "
                            + "And SI_ID = {0} "
                            + "And SI_INSTANCE = 0", Request.QueryString["rId"]);
                        InfoObjects infoObjects = infoStore.Query(queryString);
                        ReportAppFactory reportFactory = (ReportAppFactory)reportService.Interface;
                        if (infoObjects != null && infoObjects.Count > 0)
                        {
                            ISCDReportClientDocument reportSource = reportFactory.OpenDocument(infoObjects[1].ID, 0);
                            Session["ReportClDocument"] = AssignReportParameters(reportSource) ? reportSource : null;
                            CrystalReportViewer1.ReportSource = Session["ReportClDocument"];
                            CrystalReportViewer1.DataBind();
                        }
                    }
                }
                //Viewer options
                // Don't enable prompting for Live and Custom
                CrystalReportViewer1.EnableParameterPrompt = !(Request.QueryString["t"] == "1" || Request.QueryString["t"] == "4");
                CrystalReportViewer1.HasToggleParameterPanelButton = CrystalReportViewer1.EnableParameterPrompt;
                CrystalReportViewer1.AllowedExportFormats = (int)(ViewerExportFormats.PdfFormat | ViewerExportFormats.ExcelFormat | ViewerExportFormats.XLSXFormat | ViewerExportFormats.CsvFormat);
            }
            protected void Page_Load(object sender, EventArgs e)
            {
                if (IsPostBack && CrystalReportViewer1.ReportSource == null)
                {
                    CrystalReportViewer1.ReportSource = Session["ReportClDocument"];
                    CrystalReportViewer1.DataBind();
                }
            }
            private bool AssignReportParameters(ISCDReportClientDocument reportSource)
            {
                bool success = true;
                if (Request.QueryString["t"] == "1" || Request.QueryString["t"] == "2" || Request.QueryString["t"] == "4")
                {
                    reportSource.DataDefController.ParameterFieldController.SetCurrentValue("", "STORE", Session["storeParam"]);
                    if (Request.QueryString["t"] == "2")
                    {
                        reportSource.DataDefController.ParameterFieldController.SetCurrentValue("", "FromDate", Request.QueryString["fromdate"]);
                        reportSource.DataDefController.ParameterFieldController.SetCurrentValue("", "ToDate", Request.QueryString["todate"]);
                    }
                }
                else if (Request.QueryString["t"] == "3")
                {
                    reportSource.DataDefController.ParameterFieldController.SetCurrentValue("", "SKU", Request.QueryString["sku"]);
                }
                else
                {
                    //Unknown report type alert
                    success = false;
                }
                return success;
            }
        }
    }

    Thanks Don for your response,
    I'm new to the SCN spaces and my content has been moved a couple of times already.
    In response to your questions:
    1. The runtime is installed on the web application server, if by that you mean the machine hosting the created .NET SDK application.
    2. My question was whether it was also required on the Crystal Server 2011 (i.e. the main enterprise server with the CMS, report management and, I guess, the RAS). I figured this would remain untouched, and queries would simply be made against it to retrieve/view reports etc.
    3. If installing the SDK on Crystal Server 2011 is indeed required, should I expect any interruption to any of the core services after a restart? I.e. I'm hoping that none of the SDK objects would interfere with the existing server objects (in SAP BusinessObjects). The reason I ask is that I note many of the SDK install directories are similar to those of the existing Crystal Enterprise Server 2011 (Product 14.0.0).
    4. Is this temp folder to be manually created/configured, or is it created by the application automatically to perform tasks? Or are you referring to the default C:\Windows\Temp directory, meaning the application would try to use it for print and export tasks? Once I'm sure which, I'd give the app pool user permission.
    5. Printing is to be client side, but I figured that by default (with the Crystal Report Viewer) it would simply pool and print from the user's printer. This is how it works with the previously used URL reporting approach (viewrpt.cwr), so a user can print the document from wherever they are with their own printer. We don't intend on printing from the server machine, but are you suggesting that a printer must be installed on a server (which one, web or enterprise?) for any client-side printing to work?
    6. The app pool is running in 32 bit mode.
    7. Initially I didn't get anything useful from Fiddler, but I'll try to look closer per your suggestion.
    It's also possible that some of my questions stem from a misunderstanding of APP vs RAS vs WEB, so please feel free to clarify. Currently I see the web server as simply the created .NET SDK application, and RAS (Crystal Server 2011 etc.) as the existing, fully established application server which I simply poll for information.
    Thank you for your patience and advice,

Maybe you are looking for

  • Extern monitor only when connected?

    hi, i have a big problem, i have a macbook pro 3,1 running arch with nvidia driver, and i have an extern monitior, which i want to use, if plugged in on dvi. i want to use the extern monitor only, no macbook screen, i know i can enable the monitor wi

  • To change assignment field (ZUONR) in accounting document

    Hi Gurus, I have a requirement to change the value of the field (ZUONR) with delivery number for particular g/l account in the accounting document when i will go to accounting information from vf03. Please help me in this regard. Thanks & Regards, R.

  • OS X Lion volume custom mount point identified as Binary Document

    Hi! I have a large partition for photos and i mounted it in home folder with /etc/fstab (using vifs): UUID=372A78A3-E57A-31AF-AF41-4732DB625394 /Users/vanburg/Photo hfs rw,auto It works great in Finder, but when i try to open it in Bridge it opens it

  • How to online datafile of rollback segment of NO archive log available

    I set offline datafile of rool back segement and rename it but when I try to online , get error to recover I try to recover but unfortunately all archive log was deleted ( kind of cron job in unix aotu delete these file) Pls advice how can I set this

  • How to enable Document is Tagged PDF

    Hi, Kindly let me know how to enable "Document is Tagged PDF" using Scripting. For more references I have attached the screenshot. Thanks for looking into this. Regards, Sudhakar