Data loss when reading v7 hashes with db 4.7.25

On a v7 hash created with db_load from db 3.2.9, db_dump from 4.7.25 will corrupt the database file unless either -r or -R is specified. With a v8 btree, 4.7's db_dump -p will output correctly but db_dump -d a will not. In either case, dumping the v8 btree does not appear to lead to data loss.
4.7's db_verify on the v7 hash reports an impossible max_buckets setting in the metadata page. It seems to do this because it clobbers the last_pgno setting
and then compares max_buckets (1, for instance) against a last_pgno of 0 instead of the true value of 2.
This patch prevents last_pgno from being clobbered with a 0:
diff --git a/btree/bt_open.c b/btree/bt_open.c
index f03652d..d77c7d6 100644
--- a/btree/bt_open.c
+++ b/btree/bt_open.c
@@ -314,7 +314,7 @@ __bam_read_root(dbp, ip, txn, base_pgno, flags)
t->bt_meta = base_pgno;
t->bt_root = meta->root;
- if (PGNO(meta) == PGNO_BASE_MD && !F_ISSET(dbp, DB_AM_RECOVER))
+ if (PGNO(meta) == PGNO_BASE_MD && meta->dbmeta.last_pgno > 0 && !F_ISSET(dbp, DB_AM_RECOVER))
__memp_set_last_pgno(mpf, meta->dbmeta.last_pgno);
} else {
DB_ASSERT(dbp->env,
diff --git a/hash/hash_open.c b/hash/hash_open.c
index f5e1d7f..769b583 100644
--- a/hash/hash_open.c
+++ b/hash/hash_open.c
@@ -110,6 +110,7 @@ __ham_open(dbp, ip, txn, name, base_pgno, flags)
if (F_ISSET(&hcp->hdr->dbmeta, DB_HASH_SUBDB))
F_SET(dbp, DB_AM_SUBDB);
if (PGNO(hcp->hdr) == PGNO_BASE_MD &&
+ hcp->hdr->dbmeta.last_pgno > 0 &&
!F_ISSET(dbp, DB_AM_RECOVER))
__memp_set_last_pgno(dbp->mpf,
hcp->hdr->dbmeta.last_pgno);
1) Is this a correct/good fix?
2) Why is db_dump writing to database files?

Hi,
Do you see the same symptoms if you use the Berkeley DB upgrade utility or API?
http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/am_upgrade.html
Regards,
Alex Gorrod
Oracle Berkeley DB
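
For reference, a minimal sketch of driving that upgrade through the C API's documented DB->upgrade method. DB->upgrade rewrites the file in place, so run it on a copy; the filename below is a made-up example:

#include <db.h>

/* Upgrade a database file in place to the current on-disk format. */
int
upgrade_db(const char *file)		/* e.g. "old-v7-hash.db" */
{
	DB *dbp;
	int ret;

	if ((ret = db_create(&dbp, NULL, 0)) != 0)
		return (ret);

	/*
	 * flags is normally 0; the DB_DUPSORT flag only matters for
	 * some databases created before the 3.1 release.
	 */
	if ((ret = dbp->upgrade(dbp, file, 0)) != 0)
		dbp->err(dbp, ret, "DB->upgrade: %s", file);

	(void)dbp->close(dbp, 0);
	return (ret);
}

The db_upgrade utility shipped with the release is the command-line equivalent.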

Similar Messages

  • How to avoid data repetition when using SELECT statements with INNER JOIN

    How can I avoid data repetition when using SELECT statements with INNER JOIN?
    thanks in advance,
    satheesh

    you can use a query like this...
      SELECT DISTINCT
             frg~prc_group1                  "Product Group 1
             frg~prc_group2                  "Product Group 2
             frg~prc_group3                  "Product Group 3
             frg~prc_group4                  "Product Group 4
             frg~prc_group5                  "Product Group 5
             prc~product_id                  "Product ID
             txt~short_text                  "Product Description
        UP TO 10 ROWS
        INTO TABLE l_i_data
        FROM
        " Joining CRMM_PR_SALESG and COMM_PR_FRG_ROD
        crmm_pr_salesg AS frg
        INNER JOIN comm_pr_frg_rod AS prd
          ON frg~frg_guid = prd~fragment_guid
        " Joining COMM_PRODUCT and COMM_PR_FRG_ROD
        INNER JOIN comm_product AS prc
          ON prd~product_guid = prc~product_guid
        " Joining COMM_PRSHTEXT and COMM_PR_FRG_ROD
        INNER JOIN comm_prshtext AS txt
          ON prd~product_guid = txt~product_guid
        WHERE frg~prc_group1 IN r_zprc_group1
          AND frg~prc_group2 IN r_zprc_group2
          AND frg~prc_group3 IN r_zprc_group3
          AND frg~prc_group4 IN r_zprc_group4
          AND frg~prc_group5 IN r_zprc_group5.
    reward if it helps
    Edited by: Apan Kumar Motilal on Jun 24, 2008 1:57 PM

  • NO DATA error when running init load with 2LIS_11_VAHDR

    hi experts,
    I try to load initial data with 2LIS_11_VAHDR.
    The steps I did:
    1. 2LIS_11_VAHDR is activated in the R/3 Logistics Cockpit (LBWE)
    2. 2LIS_11_VAHDR is replicated in BW
    3. Setup tables are filled for SD sales orders in R/3
    4. Testing 2LIS_11_VAHDR with RSA7 in R/3 delivers 981 records
    When I start the InfoPackage with "Initialize delta process with Data Transfer", the error "0 Records - NO DATA" occurs.
    Any idea what I'm doing wrong?
    Best Regards
    neven

    Hi,
    Go to transaction BD87 and check for any yellow requests. If there are any, select them and process them manually.
    or
    In the monitor, choose Environment -> Transactional RFC -> In the Source System.
    Then give the user ID and password for the source system and execute; see whether any LUWs are pending and process them.

  • NTFS data lost when read on Mac OS X Mountain Lion

    I have an external disk with NTFS and use Mac OS X Mountain Lion to work with the data on the disk.
    However, I have encountered problems when working with that data on the Mac: somehow the data inside several directories on the NTFS disk is lost.
    The data was not deleted; it was lost directly when it was opened on the Mac.
    In the Mac's Finder, the size of those directories is reported as zero bytes.
    When I connected the disk to a Windows machine afterwards, it kept showing 0 bytes. The data was lost permanently just after I opened it from the Mac - it was there before I opened it on the Mac, but now it's gone.
    This has happened twice, on different NTFS disks; it seems that Mac OS X deleted those files on the NTFS disks.
    Has anyone here encountered the same problem?
    How can I prevent it from deleting NTFS data again?
    Is there any way to recover the lost NTFS data?

    Data Recovery – Best
    Data Recovery – Disk Drill
    Data Recovery – Data Rescue
    Data Recovery – File Salvage
    Data Recovery – Stellar Phoenix
    Data Recovery - uFlysoft
    Data Recovery - Recovering Deleted Files
    Data Recovery - Recovering Deleted Files (2)

  • HT4972 Data loss when updating OS to 5.1.1

    Hi,
    I have a critical problem, i hope anyone can help me.
    I updated my iPhone from 4.3.3 to 5.1.1, but when iTunes tried to back up my data it gave me an error. It then updated the OS to 5.1.1, and when I restarted the iPhone all my data was deleted (contacts, photos, notes, ...). Is there any way to restore my data?
    Thanks in advance.

    Sorry to hear about the problem with the iOS upgrade. Unfortunately there is NO way you can downgrade the iOS on the phone. Try a restart, reset, and restore of the phone again. That should fix the issue.

  • Data Loss when a database crashes

    Hi Experts,
    I was asked the question "how much data is lost when you pull the plug on an Oracle database all of a sudden?", and my answer was "all the data in the buffers that has not been committed". We know that committed data can be sitting in the redo logs (not yet written to the datafiles) that the instance will use for recovery once it is restarted; however, this got me thinking: how much uncommitted data is actually sitting in memory that could be lost if the instance goes down all of a sudden?
    With the use of sga_target and sga_max_size, the memory allocation for the buffer cache will vary from time to time. So, is it possible to quantify the amount of lost data at all (in bytes, KB, MB, etc.)?
    For example: the SGA is set to 1 GB (sga_max_size=1000m), with a checkpoint every 15 minutes (as we can't predict/know how often the app commits); assume a basic transaction size for any small to medium size database. Redo logs are set to 50 MB (even though this doesn't come into play at this stage).
    I would be really interested in your thoughts and ideas please.
    Thanks

    All Oracle Data Manipulation Language (DML) and Data Definition Language (DDL) statements must record an entry in the Redo log buffer before they are executed.
    The Redo log buffer is:
    •     Part of the System Global Area (SGA)
    •     Operating in a circular fashion.
    •     Size in bytes determined by the LOG_BUFFER init parameter.
    Each Oracle instance has only one log writer process (LGWR). The log writer operates in the background and writes all records from the Redo log buffer to the Redo log files.
    Well, just to clarify, the log writer is writing committed and uncommitted transactions from the redo log buffer to the log files more or less continuously - not just on commit (when the log buffer contains 1 MB of redo, is 1/3 full, every 3 seconds, or on every commit - whichever comes first - all of those trigger redo writes).
    The LGWR process writes:
    •     Every 3 seconds.
    •     Immediately when a transaction is committed.
    •     When the Redo log buffer is 1/3 full.
    •     When the database writer process (DBWR) signals.
    Crash and instance recovery involves the following:
    •     Roll-Forward
    The database applies the committed and uncommitted data in the current online redo log files.
    •     Roll-Backward
    The database removes the uncommitted transactions applied during a Roll-Forward.
    What also comes into play in the event of a crash is MTTR, for which there is an advisory utility as of Oracle 10g. Oracle recommends using the fast_start_mttr_target initialization parameter to control the duration of startup after instance failure.
    From what I understand, uncommitted transactions will be lost, or more precisely undone, after an instance crash. Committed transactions are safe because a commit does not return until LGWR has flushed their redo to disk, so the amount "lost" is bounded by the work in flight, not by the size of the SGA. That's why it is good practice to commit transactions explicitly, unless you plan to use SQL rollback. By the way, every DDL statement, and a normal exit from SQL*Plus, implies an automatic commit.
    Edited by: Markus Waldorf on Sep 4, 2010 10:56 AM

  • "Bad data format" when reading txt file from the presentation server

    Hello,
    I have a piece of code which reads a txt file from the presentation server into an internal table, like below:
    DATA : lv_filename TYPE string.
    lv_filename = 'C:\abap\Test.txt'. "I created a folder called abap under C:\
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
      EXPORTING
         FILENAME            = lv_filename
      CHANGING
         DATA_TAB            = lt_tsd. " lt_tsd has the exact same fields as Test.txt's. Test.txt has only one line, tab-delimited.
    When running this code, the exception BAD_DATA_FORMAT is raised.
    Is it because of the file encoding or the delimiter, or some other reason?
    Thanks,
    Yang

    Hello,
    If it's tab-delimited, then use the HAS_FIELD_SEPARATOR parameter and check:
    DATA : lv_filename TYPE string.
    lv_filename = 'C:\abap\Test.txt'. "I created a folder called abap under C:\
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
      EXPORTING
         FILENAME            = lv_filename
         FILETYPE            = 'ASC'
         HAS_FIELD_SEPARATOR = 'X'
      CHANGING
         DATA_TAB            = lt_tsd.
    Vikranth

  • Weird issue: Partial data inserted when reading from Global temporary table

    I have a complex SQL query that fetches 88k records. The query uses a global temporary table which is a replica of one of our permanent tables. When I do a CREATE TABLE ... AS SELECT ... using this query, it inserts fewer records (66k or fewer). But when I point the query at the permanent table, it inserts all 88k records.
    1. I tried running the SELECT query separately using the temp and the perm table. Both retrieve 88k records.
    2. From debugging I found that the problem occurred when we were trying to perform a left outer join on an inline view.
    However, the problem got resolved when I used the /*+ FIRST_ROWS */ hint.
    From my limited Oracle knowledge I assume that it is a problem with the query and how it is processed in memory.
    Can someone clarify what is happening behind the scenes and whether there is a better solution?
    Thanks

    user3437160 wrote:
    (quotes the question above)
    Might the specifics be OS & Oracle version dependent?
    How to ask question
    SQL and PL/SQL FAQ

  • How to avoid data loss when an action is performed ....

    hi,
    I am using dynamic tabs. Each tab contains a separate JSP page (the JSP page is included for the corresponding tab). Each page can contain more than 25 fields. The problem: if, for example, I select some checkboxes in the first tab, then go to the second tab and do some insert operations, when I come back to the first tab the checkboxes I had selected and the text I had entered should still be there. If this can be solved by using AJAX, please guide me.
    Tools i am using : jsp, struts.
    Looking forward to hear to solve this problem.

    hi....
    See to it that when you return to the tab, you set the form values back onto the page.
    I mean, if you are using a form bean for your JSP, use the name attribute of the Struts html tag, and give the form bean name to the name attribute. I hope this solves your problem.
    thanks
    with regards
    shekhar

  • Safest Way to Penetration Test an Oracle DB with Potential Data Loss

    Hi,
    I was wondering what the safest way is to protect Oracle from data loss when running a web application scan. We currently have an external company about to perform a web application scan, and they warned us of potential data loss. However, we can't afford much downtime, and our storage doesn't support features such as copy-on-write. What would you recommend? Do you think that something like putting the database in read-only mode for the duration of the test (2 hours) and enabling audit on all actions would be sufficient (we could then review the audit to see if any unauthorized calls were made)? Thanks.

    If not running live, you might consider restoring your database to a point before the test. But you need to have confidence this would work.
    I assume you're running live for the duration of the test.
    Going read-only might invalidate the test, and your application might not be able to run read-only without generating errors.
    Examine and be aware of the flashback technologies available at your database version and which ones might be useful. In this context, increasing the undo space/retention target might be helpful, but don't dash off doing something at the last minute.
    Ensure you have checked out how to use LogMiner.
    Consider not continuously updating any standby database you have until the test is complete.
    Ensure your most recent backup is successful, and that you have checked your restore procedures and have contingency plans in place.
    In practice the web penetration test may attempt to change a small amount of data in a small number of records, but the agreement probably means they are not liable even if they dropped a schema in the database!
    If you have to correct data following their test, then do so carefully. Doing the wrong thing (especially in a panic) can make a situation worse, especially if you are doing something you are not familiar with. Often it may be better to correct the data loss through the application itself.
    If you do turn on auditing, be aware of what it gives you before you turn it on, and beware of any space implications.
    I notice you are recently registered on the site... this may mean you don't have much experience with Oracle; you may be more of a system administrator, for instance. No disrespect in that whatsoever. However, especially if this is the case, then remember: in my opinion, dashing to change something at the last minute statistically often does more harm than good overall and may be harder to undo.
    Hope this helps.
    bigdelboy
    Edited by: bigdelboy on 28-Mar-2009 01:18
    Edited by: bigdelboy on 28-Mar-2009 01:22

  • External Drives for Mac Experiencing Data Loss with Maverick OS -- UPDATED FOR NOVEMBER 6, 2013

    --- Updated November 6, 2013 ---
    On October 30th, 2013 Western Digital informed registered customers of affected products via E-mail regarding reports of Western Digital and other external HDD products experiencing data loss when updating to OS X Mavericks (10.9).  Our investigation to date has found that for a small percentage of customers that have the WD Drive Manager, WD Raid Manager and/or WD SmartWare software applications installed on their Mac, there can be cases of a repartition and reformat of their Direct Attached Storage (DAS) devices without customer acknowledgement which can result in data loss.  
    WD has been tracking this issue closely through our WD Forum and through our Technical Support hotline and the occurrence rate of this event has been very low.  A specific set of conditions and timing sequences between the OS and the WD software utilities has to occur to cause this issue.  Should this event occur, the data on the product can likely be recovered with a third party software utility if the customer stops using the device immediately after the OS X Mavericks (10.9) upgrade.  WD will be issuing updated versions of these software applications that resolve this issue.
    WD strongly urges our customers to uninstall these software applications before updating to OS X Mavericks (10.9), or delay upgrading until we provide an update to the applications.  If you have already upgraded to Mavericks,  WD recommends that you remove these applications and restart your computer.  If you have already upgraded to Mavericks and are experiencing difficulty in accessing your external hard drive,  please do not save anything to the drive, disconnect the drive from your computer, and contact Western Digital Customer Service at http://support.wdc.com/contact/.
    --- Updated November 5, 2013 ---
    There are reports of Western Digital and other external HDD products experiencing data loss when updating to Apple's OS X Mavericks (10.9).  Western Digital is urgently investigating these reports and the possible connection to the WD Drive Manager, WD Raid Manager and WD SmartWare software applications. 
    Until the issue is understood and the cause identified, WD strongly urges our customers to uninstall these software applications on their systems before updating to OS X Mavericks (10.9), or delay upgrading.  If you have already upgraded to Mavericks, WD recommends that you remove these applications and restart your computer. WD has removed these software applications from our web site solely as a precaution as we investigate this issue.
    If you have already upgraded to Mavericks and are experiencing difficulty in accessing your external hard drive, please do not save anything to the drive, disconnect the drive from your computer, and contact Western Digital Customer Service at http://support.wdc.com/country/ for further assistance.
    You can now download the WD Software Uninstaller.  This utility will remove Mac WD SmartWare and WD Drive Manager software.  You can find the uninstaller under any of the Mac Drive Downloads sections such as the My Book Studio below.
    http://support.wdc.com/product/download.asp?groupid=124&sid=214&lang=en

    I agree. After installing Mavericks I was troubleshooting and reinstalling drivers for days. Many apps did not work anymore, although the updates slowly arrive. In total divergence from the old Apple philosophy, I had to use endless library-cleaning terminal commands to get a new Canon network printer running again (Canon had provided the procedures back when updating from OS X 10.6 to 10.7 already). Then, the first time I used the SuperDrive (CD/DVD drive) to try to burn an audio CD, I got baffling error messages ("Drive already used.."). After this, the RAID 1 status of the two MyBook archives changed to JBOD. The changes from OS X 10.8 to 10.9 I find unnecessary (iBooks could be an app; Maps we already have on other channels). Some changes are even a step back (calendar graphics), and the so much more user-friendly office suite iWork is free, but degraded and of limited use!
    MadOverlord wrote:
    I have had multiple cases of data loss on WD drives since upgrading to Mavericks, and I do not use any WD software. I was using a 4-bay PROBOX USB3 enclosure with 4 independent drives, each with 1 volume, no RAID. I have managed to copy large files off the WD drive onto my MBP internal drive using the Finder, and then found that they are not identical. This problem is intermittent, does not generate any Finder errors, and the drives all show 100% health via SMART. The configuration was rock-solid before Mavericks, and has trashed the directories of 4 drives since I upgraded last week. I am attempting to find a solid replication case for this, but it is difficult. I have not been able to replicate the issue on another 2-bay USB2 dock that I have (different manufacturer). One thing is clear: only one thing changed -- I upgraded to Mavericks.

  • Question: Will uncommitted persistent data be lost if my application using Kodo/JDO crashes?

    Hi,
    I am very new to JDO and Kodo and I am still learning. I have a user specification that requires no data loss when the application crashes. If I am developing my application using Kodo for the data access layer, and my application crashes and needs to restart, what happens to all the persistent data that has not been committed to the database?
    Vivian

    If an app crashes, all current transactions will be aborted. There is a
    difference between data loss and aborting the current transaction. Data
    loss implies losing some persistent data -- data that resides in the
    database. That won't happen with Kodo.
    You will, however, lose any changes that have not been committed to the
    database yet. This is a good thing. You absolutely DO NOT want an
    unfinished transaction to be recorded, because that could violate the
    integrity of your data. Consider a transaction that decrements from one
    bank account and increments another to implement a funds transfer. You
    certainly wouldn't want to record the decrement unless you are absolutely
    sure the increment would be recorded too!
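
    Purely as an illustrative sketch of that all-or-nothing behavior, here is the funds-transfer idea written against the Berkeley DB C transaction API (the library discussed at the top of this page) rather than against Kodo/JDO. The function shape and the integer balance records are assumptions, and dbenv/dbp are assumed to come from an environment opened with DB_INIT_TXN:

    #include <string.h>
    #include <db.h>

    /*
     * Both writes become durable together or not at all.  If the
     * process crashes before txn->commit() returns, recovery rolls
     * the whole transaction back: committed data is never lost, and
     * uncommitted changes never appear.
     */
    int
    transfer(DB_ENV *dbenv, DB *dbp, const char *from, const char *to, int amount)
    {
        DB_TXN *txn;
        DBT key, val;
        int bal, ret;

        if ((ret = dbenv->txn_begin(dbenv, NULL, &txn, 0)) != 0)
            return (ret);

        memset(&key, 0, sizeof(key));
        memset(&val, 0, sizeof(val));
        val.data = &bal;                  /* read balances into bal */
        val.ulen = sizeof(bal);
        val.flags = DB_DBT_USERMEM;

        /* Decrement the "from" balance. */
        key.data = (void *)from;
        key.size = (u_int32_t)strlen(from) + 1;
        if ((ret = dbp->get(dbp, txn, &key, &val, 0)) != 0)
            goto err;
        bal -= amount;
        if ((ret = dbp->put(dbp, txn, &key, &val, 0)) != 0)
            goto err;

        /* Increment the "to" balance. */
        key.data = (void *)to;
        key.size = (u_int32_t)strlen(to) + 1;
        if ((ret = dbp->get(dbp, txn, &key, &val, 0)) != 0)
            goto err;
        bal += amount;
        if ((ret = dbp->put(dbp, txn, &key, &val, 0)) != 0)
            goto err;

        return (txn->commit(txn, 0));

    err:
        (void)txn->abort(txn);
        return (ret);
    }

    If the process dies anywhere before the commit returns, recovery undoes both writes together: that is the "aborting the current transaction" case described above, not data loss.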

  • Data Loss in DB to DB Transformation in ODI

    Hi,
    I am facing data loss when trying a transformation for a DB-to-DB mapping in ODI.
    I have two tables in two different schemas with the following specifications. In the ODI designer model I have set the type of PLACE to NUMBER in the target and to VARCHAR2 in the source, and mapped them accordingly. The interface works successfully when I insert the row ('12', 'ani', '12000', '55').
    Now, for testing, I insert the rows ('1', 'ani', '12000', '55') and ('2', 'priya', '15000', '65t'); on execution it gives the expected error (ORA-01722: invalid number) in the task "Insert flow into I$ table". My C$ table is populated with the data from the source, but the E$, I$ and target tables are not.
    When I instead insert ('3', 'shubham', '12000', '56') and ('4', 'shan', '12000', '59'), the interface completes successfully, the rows are deleted from the C$ table, and the data is inserted into the target table.
    My question: where have the rows ('1', 'ani', '12000', '55') and ('2', 'priya', '15000', '65t') gone? If they are lost, which table can they be recovered from so that no data loss takes place?
    The codes for source and target tables are as follows:
    source table code:
    CREATE TABLE "DEF"."SOURCE_TEST"
        "EMP_ID"   NUMBER(9,0),
        "EMP_NAME" VARCHAR2(20 BYTE),
        "SAL"      NUMBER(9,0),
        "PLACE"    VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    inserted data:
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('1', 'ani', '12000', '55')
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('2', 'priya', '15000', '65t')
    Target table code:
    CREATE TABLE "ABC"."TARGET_TEST"
        "EMP_ID"     NUMBER(9,0),
        "EMP_NAME"   VARCHAR2(20 BYTE),
        "YEARLY_SAL" NUMBER(9,0),
        "BONUS"      NUMBER(9,0),
        "PLACE"      NUMBER(9,0),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    Thanks.

    So, first you have data in "DEF"."SOURCE_TEST".
    You then run your interface, and the data is moved into "ABC"."TARGET_TEST" if the interface executes successfully with no errors.
    Correct? - no data loss
    But if you're saying that you need to handle records which are going to cause the "invalid number" error, then you should read up on 'flow' and 'static' control and how to flag errors before loading them. Flow and Static Control allows ODI to identify erroneous records prior to loading - they'll be put in the E$ table for you to deal with later.
    If you haven't already, I'd encourage you to take a look at the documentation on this:
    Implementing Data Quality Control

  • Is it possible to import data into a Reader Extended PDF created in Livecycle?

    Hi there,
    I make a lot of Reader Extended forms for my company. When I issue form updates, sometimes staff have to re-copy or re-type records into forms with repeating subforms. I'd like to be able to jump-start their process by importing from an old, filled-out form into my new updated form.
    I've found I can do this by making an XML data file of the data from the old form, but I can't import into an Extended form using Adobe Reader. Most of our staff don't have Acrobat, so they aren't able to save data into non-extended forms.
    Is this clear, what I'm asking? Import XML data into a Reader Extended form.
    Thanks for any help you can give,
    Laura

    Hi,
    It's indeed possible to import data in a reader enabled form with Reader.
    Here's a sample:
    LiveCycle Blog: XML per Skript in Adobe Reader importieren//Import XML via Script into Adobe Reader

  • Importing XML Data into a Reader Extended PDF

    I'm using Designer 9 to create a PDF that will import an XML document on initialization and will use that document to populate multiple dependent dropdown lists.
    That works fine in preview, where I use xfa.datasets.data.mydataset to access the XML document and proceed to manipulate the XML with E4X.
    Now I have the form working correctly in Designer preview, and I want to reader-extend the form and configure the data connection to access the XML file dynamically instead of using the preview data.
    I've tried a few methods of accessing the data via a reader-extended form and they have all failed.
    I've tried to call importData("local filename") and access the XML file that way. That appears to fail silently.
    I've also tried to embed the XML in a hidden form field, but the XML document length apparently exceeds the max character limit, so I can't do that.
    So how can I import the XML document on form initialization, either a) by importing the data (ideally through a web URL), or b) by embedding the XML document directly in the form somewhere? Thanks.

    Hi,
    It's indeed possible to import data in a reader enabled form with Reader.
    Here's a sample:
    LiveCycle Blog: XML per Skript in Adobe Reader importieren//Import XML via Script into Adobe Reader
