NotAllowedError - Reader 8 vs 7 inconsistency

This is a loose transcription of a message already posted on the Planetpdf Forum. It became evident to me that there are serious errors in the JS documentation, and there are also unannounced changes in JS behavior, so I really don't know whether it is worthwhile to keep investing in something so unreliable.
I have a problem with a PDF which I authored in Acrobat 7.0 Professional (WinXP). The PDF uses JavaScript to achieve some graphical effects. The problem is, it works just fine when viewed in Adobe Reader 7, but no longer works in Reader 8.
This weird behavior is due to some apparently new Security Settings within Adobe Reader 8:
Reader 8 will refuse to set the .rect property of a ScreenAnnot.
There is a sample PDF for download at Planetpdf:
http://forum.planetpdf.com/wb/default.asp?action=9&read=60443&fid=34
If you enable the "Show console" functionality within Reader 8, you will see that the following error is thrown:
NotAllowedError: Security settings prevent access to this property or method.
ScreenAnnot.rect:5:Screen theScreenAnnot:Mouse Up
These are my findings so far:
- the ScreenAnnot.rect documentation in the "JavaScript for Acrobat API Reference 8.1" makes no reference to any rights or security-related issues.
- according to my experiments so far, the problem is NOT related to the Document Restrictions settings, but who knows...
Currently, it seems to me that I have no option other than rewriting my huge JS code to use Fields instead of ScreenAnnots. That would mean a lot of work, and I would also have to give up some quite important features which Fields simply don't provide.
I don't think there are any workarounds, but if you know one, please let me know.
Thanks for your time.

Actually, the JS API Reference for ScreenAnnot.rect does say it is restricted. If you notice the "D" in column 2 of the quick bar and then read what that "D" means, this is what the documentation says:
Writing to this property or method dirties (modifies) the PDF document. If the document is subsequently saved, the effects of this method are saved as well. (In Adobe Reader, the document requires specific rights to be saved.)
So if the document does not have save rights, Reader will not be able to modify the property. I am getting this from the JavaScript for Acrobat API Reference Version 8.1 (April 2007).
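To see the failure in context, here is a minimal Acrobat JavaScript sketch of the kind of code that now fails in Reader 8. The annotation name and offsets are illustrative, and only the pure geometry helper can be exercised outside Acrobat:

```javascript
// Pure helper: shift a rect [x0, y0, x1, y1] by (dx, dy).
function shiftRect(rect, dx, dy) {
  return [rect[0] + dx, rect[1] + dy, rect[2] + dx, rect[3] + dy];
}

// Inside a Mouse Up action in Acrobat/Reader (sketch, not verified):
// var a = this.getAnnot(0, "theScreenAnnot");
// try {
//   a.rect = shiftRect(a.rect, 10, 0); // throws NotAllowedError in Reader 8
// } catch (e) {                        // unless the document has save rights
//   console.println(String(e));
// }
```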

Similar Messages

  • HT204384 My SD card won't read in Finder but will show in Disk Utility, any advice?

    Hi, my card reader started to become inconsistent over the last week, and it has now stopped reading cards altogether. It will show up in Disk Utility, so I can format it etc., but it just won't show in the Finder. When I remove the card without prompting, I get the 'Failed to eject properly' message. Any advice would be appreciated. I only purchased it brand new in Nov 2011, so I'm not sure what operating system it is, but you may know from the date of purchase.
    Thanks
    Rob

    I had this problem too on my MBP mid-2012 running Mavericks.  SD cards (in MBP slot or USB reader) would not show up in Finder.  I was able to see them mounted in Disk Utility.  What tipped me off that they were in fact mounting (and led me to check Disk Util) is that the OS complained of improperly ejecting the sd card when I removed it.
    To clarify what BDAqua said (which I didn't understand until I saw another thread), here's what fixed my problem: click 'Finder' > 'Preferences', select the Sidebar tab and tick the 'External disks' box in the Devices section.
    Now my sd cards show up.  Thanks!!

  • Accessible Reports for JAWS Screen Reader

    Hi there,
    I'm developing some Reporting Services reports to be deployed in SharePoint (integrated mode). However, about 30% of my user base is blind and therefore uses the JAWS screen reader to access these reports. The only solution I found online was to add an HTML render extension with AccessibleTablix enabled. This solution implies that users will have to export the report to HTML 4.0 to get the accessible version. It is not working, however, because all reports that use grouping are read in a very inconsistent way by JAWS after export to HTML 4.0.
    I therefore decided to create an accessible version of each report, to make JAWS read the report in a moderately accessible way (although not the best). The problem is that even with the accessible version, users still have to export to HTML 4.0, which is definitely not a good solution in terms of user experience. Is there a way to enable AccessibleTablix in the report so that the default report view in SharePoint is accessible without the need to export to HTML 4.0?
    Are there any tips on creating accessible reports with basic groups?
    Thanks 
    Dan

    Hi Doris,
    I'm using SharePoint 2013, Reporting Services 2012. I'm not sure about the report viewer. The download link is not working.
    The reports are showing well, but the JAWS screen reader reads them without mentioning the column headers unless they are exported to the accessible HTML version. And tables containing groups with subtotals are not read well even when exported. How do I enable AccessibleTablix for the default report without the need to export?
    I created the report using report wizard.
    This is what I used to enable AccessibleTablix, but it only works when exported:
    1. Open the SharePoint Management Shell.
    2. Run Get-SPRSServiceApplication and note the ID value.
    3. Run the following, substituting the ID value:
    New-SPRSExtension -Identity <ID value> -ExtensionType "Render" -Name "HTMLRS" -TypeName "Microsoft.ReportingServices.Rendering.HtmlRenderer.Html40RenderingExtension,Microsoft.ReportingServices.HtmlRendering" -ServerDirectives "<OverrideNames><Name Language='en-US'>HTMLRS</Name></OverrideNames>" -ExtensionConfiguration "<DeviceInfo><AccessibleTablix>true</AccessibleTablix></DeviceInfo>"
    4. Run IISRESET.

  • Error when exporting IDOC from one XI system to another

    Hi all,
    When I do an export of an IDOC from my one XI server, to be able to import it into another, I get the error below. The funny thing is that it works perfectly for my COND_A IDOC, but not at all for the DEBMAS and MATMAS IDOCs.
    Any ideas?
    <b>Error during export. Internal error during pvc call: SAP DBTech JDBC: Result set is positioned after last row.</b>
    Regards,
    Liesel

    Hey,
    I guess you are using MAXDB,
    please read note number 1055246
    (Inconsistent XI content caused by bug in MAX DB)
    Good luck!

  • What's happening to my MacBook?!?!?!

    My MacBook recently started acting funny.
    Two days ago, I was using it and it just completely shut off. I pressed the power button to turn it on again, I heard the startup chime, then I heard a "click" and it shut off again (no, it wasn't the CD drive). This happened like 10 times, but it started up fine when I plugged it back into the power adapter.
    Today, I turned it on and it took like 7 minutes to boot. Then it showed "Safe Boot" at the login screen. I never pressed anything while booting, so how did it boot into "Safe Boot"? So I logged in and I noticed the following:
    1) AirPort is gone... It doesn't show in the menu bar or in the Network option in System Preferences.
    2) No sound! The volume adjuster in the menu bar is gone AND the volume buttons on the keyboard do not work. I went to the Sound tab in System Preferences and it says "No Output Devices Found" and "No Input Devices Found."
    3) The backlight buttons do not work either. If I press one, it just shows that it is on the lowest setting and won't move.
    Also, My MacBook shut off by itself once today.
    I took screenshots but I don't know how to add them to this message so if someone tells me how, I can add screenshots of what is happening.
    Anyways, someone please help me with these issues! I have invested a lot in my MacBook and I don't want it to have tons of problems like my iBook did.

    Safe mode could be what is preventing your machine from running normally, since it boots using the bare minimum of drivers in order to eliminate conflicts.
    From what I read, you have an inconsistent battery: it shuts the machine off, and it won't turn back on because there is no juice left in it.
    Try calibrating your battery, and reset the NVRAM and the PMU.
    http://docs.info.apple.com/article.html?artnum=86284
    http://docs.info.apple.com/article.html?artnum=2238
    http://docs.info.apple.com/article.html?artnum=303319
    Also try repairing permissions using your installer disc: repair both the disk and permissions.
    Hope these simple tasks eliminate your problem.
    Good Luck

  • Oracle 11g - How to repair block corruption(on free space) in datafile

    Hi,
    I have a tablespace with 3 datafiles, one of which has a corrupted block. But no objects or data are affected, as the corrupted block is in free space. This was shown in the alert logs.
    Please see below the details:
    Wed Apr 06 15:30:04 2011
    SMON: Restarting fast_start parallel rollback
    SMON: ignoring slave err,downgrading to serial rollback
    ORACLE Instance geooap (pid = 12) - Error 1578 encountered while recovering transaction (10, 6) on object 149755.
    Errors in file f:\oracle11g\diag\rdbms\geooap\geooap\trace\geooap_smon_5540.trc:
    ORA-01578: ORACLE data block corrupted (file # 7, block # 54053)
    ORA-01110: data file 7: 'F:\ORACLE11G\ORADATA\GEOOAP\ORDER_DATA_01.DBF'
    GEOAP:
    Fri Apr 01 14:57:48 2011
    Errors in file f:\oracle11g\diag\rdbms\geop\geop\trace\geop_arc1_2156.trc:
    ORA-00235: control file read without a lock inconsistent due to concurrent update
    Fri Apr 01 14:57:58 2011
    ================================================================
    The corruption is being reported in a free space block of the ORDER_DATA_01.DBF.
    I’ve checked all the tables (and indexes) in this tablespace and none report corruption.
    ================================================================
    Is there any action I need to take to remove the corruption at this point? It has not affected any operation on the database yet.
    What is the best way to get rid of the corrupt block without dropping and rebuilding the full tablespace (which is around 6 GB in total across the 3 datafiles)?
    Thanks a lot

    Can RMAN recover the datablock from this cold backup (which is a week old; the data file was not corrupted then)?
    Please note that to do the recovery, you would need the backup and the archived log files generated since that backup. Think about what you are asking to do: it's a single block whose recovery you are asking for from a week-old backup, which would obviously be at a much older SCN than the rest of the database. How would you make that block consistent with the rest of the datafile? If you don't have the archived logs in the database whose block is corrupted, you may forget that block and any data it might ever have had. Also, please read the documentation about block recovery, which explains the requirements very clearly:
    http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmblock.htm#BRADV89784
    From the above link, 1st point:
    The target database must run in ARCHIVELOG mode and be open or mounted with a current control file.
    HTH
    Aman....
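For reference, the usual RMAN approach to a single corrupt block looks like this (a sketch based on the 11g documentation linked above; the file and block numbers are taken from the alert log in the question, and the same ARCHIVELOG/backup requirements apply):

```
-- Identify corrupt blocks; results go to V$DATABASE_BLOCK_CORRUPTION:
RMAN> BACKUP VALIDATE CHECK LOGICAL DATAFILE 7;

-- 11g block media recovery (BLOCKRECOVER in 10g):
RMAN> RECOVER DATAFILE 7 BLOCK 54053;

-- Or repair everything currently listed in V$DATABASE_BLOCK_CORRUPTION:
RMAN> RECOVER CORRUPTION LIST;
```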

  • Data collection was switched from an AI Config task writing to an hsdl file to synchronized DAQmx tasks logging to TDMS files. Why are different readings produced for the same test?

    A software application was developed to collect and process readings from capacitance sensors and a tachometer in a running spin rig. The sensors were connected to an Aerogate Model HP-04 H1 Band Preamp connected to an NI PXI-6115. The sensors were read using AI Config and AI Start VIs. The data was saved to a file using hsdlConfig and hsdlFileWriter VIs. In order to add the capability of collecting synchronized data from two Eddy Current Position sensors in addition to the existing sensors, which will be connected to a BNC-2144 connected to an NI PXI-4495, the AI and HSDL VIs were replaced with DAQmx VIs logging to TDMS. When running identical tests, the new file format (TDMS) produces reads that are higher and inconsistent with the readings from the older file format (HSDL).
    The main VIs are SpinLab 2.4 and SpinLab 3.8, in the folders "SpinLab old format" and "Spinlab 3.8" respectively. SpinLab 3.8 requires the Sound and Vibration suite to run correctly, but it is used after the part that is causing the problem. The problem is occurring during data collection in the Logger segment of code, or during processing in the Reader/Converter segment of code. I could send the readings from the identical tests if they would be helpful, but the data takes up approximately 500 MB.
    Attachments:
    SpinLab 3.8.zip ‏1509 KB
    SpinLab 2.4.zip ‏3753 KB
    SpinLab Screenshots.doc ‏795 KB

    First of all, how different is the data?  You say that the reads are higher and inconsistent.  How much higher?  Is every point inconsistent, or is it just parts of your file?  If it's just in parts of the file, does there seem to be a consistent pattern as to when the data is different?
    Secondly, here are a couple things to try:
    Currently, you are not calling DAQmx Stop Task outside of the loop; you're just calling DAQmx Clear Task.  This means that if there were any errors that occurred in the logging thread, you might not be getting them (as DAQmx Clear Task clears outstanding errors within the task).  Add a DAQmx Stop Task before DAQmx Clear Task to make sure that you're not missing an error.
    Try "Log and Read" mode.  "Log and Read" is probably going to be fast enough for your application (as it's pretty fast), so you might just try it and see if you get any different result.  All that you would need to do is change the enum to "Log and Read", then add a DAQmx Read in the loop (you can just use Raw format since you don't care about the output).  I'd recommend that you read in even multiples of the sector size (normally 512) for optimal performance.  For example, your rate is 1MHz, perhaps read in sizes of 122880 samples per channel (something like 1/8 of the buffer size rounded down to the nearest multiple of 4096).  Note: This is a troubleshooting step to try and narrow down the problem.
    Finally, how confident are you in the results from the previous HSDL test?  Which readings make more sense?  I look forward to hearing more detail about how the data is inconsistent (all data, how different, any patterns).  As well, I'll be looking forward to hearing the result of test #2 above.
    Thanks,
    Andy McRorie
    NI R&D
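As a side note, the suggested read size can be computed with a tiny helper (an illustrative sketch; the one-eighth-of-buffer heuristic and the 4096-sample multiple are the values assumed in the post above):

```python
def chunk_size(buffer_size_samples, multiple=4096):
    """Roughly 1/8 of the acquisition buffer, rounded down to the
    nearest multiple of `multiple`, as suggested above."""
    per_read = buffer_size_samples // 8
    return max(multiple, (per_read // multiple) * multiple)

# With a 1 MHz rate and a buffer of about 1,000,000 samples:
print(chunk_size(1_000_000))  # 122880
```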

  • Mutexes for queue

    I am sharing a queue database amongst different processes. I only used the DB_CREATE and DB_INIT_MPOOL flags. There is always just 1 writer, and there can be several readers. There is no chance of a writer and reader trying to access the same record at the same time the way I have it set up, although a writer could be appending when a reader is reading elsewhere.
    My question is, since a queue uses record-level locking, do I need to allocate enough mutexes for every record? If I plan on having upwards of a billion entries, do I need to set the max number of mutexes to a billion? I don't even pass the DB_INIT_LOCK flag, so are mutexes even really being used?

    David,
    Thanks for the reply. In my situation, the writer only appends. To ensure that a reader isn't reading the last record as it is being written, I keep the latest written record number in shared memory, and the reader checks its record number against that. Is there still a possibility of reading data in an inconsistent state using the above mechanism without DB_INIT_LOCK? I'm not using transactions because of the performance hit.

  • Working in Photoshop Gray to Indesign to PDF help..

    Hi,
    CAN SOMEONE HELP ME save out a PDF in the final tagged color space:  Gray Dot Gain 20%
    I am saving my Photoshop .tif files with embedded profile: Gray Dot Gain 20%
    My InDesign Working CMYK is: US Web Coated SWOP v2
    I place my .tif files in InDesign.
    What is confusing me is InDesign's lack of Gray settings.
    My first Question is TARGETING my PDF to the desired print space:  Gray Dot Gain 20% — THERE IS NO GRAY CONVERSION OPTION!
    As my InDesign book is in Working CMYK, and my placed .tif files are in Gray Dot Gain 20% — I have determined I will need to Convert to my print profile in Acrobat — but what would be the best Adobe PDF Preset setting to get my tagged CMYK and tagged Gray into a PDF so I can Convert Colors to Dot Gain 20% in Acrobat?
    The lack of Gray support in InDesign has me baffled. I am thinking I should avoid Gray altogether and just feed InDesign RGB, to be sure InDesign passes my .tif profile through to Acrobat...

    Thanks, I read Petteri's and Peter's replies very carefully.
    InDesign treats grayscale images as being on the K plate of the CMYK
    working space.
    That is enlightening, yet I do not understand what it means (K v. Dot Gain 20%) — I simply want a PDF properly Converted and tagged with the "Gray Dot Gain 20%" ICC profile.
    Apparently InDesign CS4 cannot pass through Gray Dot Gain 20% properly tagged as such?
    And I will need to follow through Converting to my target print space inside Acrobat.
    I say this because, in my InDesign Export PDF process, I choose to do No Color Conversions, Embed and Preserve Profiles — then I use Acrobat Output Preview> Object Inspector to verify the profiles.
    My first tagged Gray Dot Gain 20% image reads "DeviceCMYK" in Object Inspector.
    The remaining 50 or so tagged Gray Dot Gain 20% images read "DeviceGray".
    This inconsistency is really weird, because my images are all the same tagged Gray Dot Gain 20% going into InDesign.
    All my text in my 100-page book reads as "DeviceCMYK" in Object Inspector.
    I cannot follow that logic (what happened to my profiles and why is my top image now reading as untagged CMYK?).
    If I then use Acrobat Convert Colors "Convert to Profile> Dot Gain 20%> Embed profile — then Acrobat's Object Inspector reads all my images and text as "ICCBasedGray Dot Gain 20%" as desired.
    With greyscale images, I create them in Photoshop and embed a grey
    ICC profile into them while saving. Then I place them in InDesign and,
    when I export the PDF, I choose a CMYK profile
    A PDF Conversion would always need to be based on a Source Profile(s) — the mystery is — if InDesign's Export PDF process is not tagging my images properly, then HOW can it make a proper follow-up Conversion to my print space?
    The only hope I have of making sense of this is if I can follow the color management chain and end up in the desired tagged print space.
    +++++++
    In my four previous books, I've Placed my tagged Gray Dot Gain 20% images in InDesign, Exported to PDF to No Color Conversions, Embed and Preserve Profiles, and handed off CMYK PDFs (apparently based on my Working CMYK profile).
    Those printed beautifully, but I am trying to learn and it makes sense I could Convert my PDF to the 100% print space.
    +++++++
    It is possible to convert your CMYK PDF to Dot Gain 20% in Acrobat, but I
    was not happy with the appearance on screen the last time I tried it ( I
    think I lost my 100% K levels)
    That sounds like Acrobat was basing the Conversion on a wrong Source Profile (which is my concern for not being able to verify the profiles it is spitting out in my PDF tests).
    It seems if I Converted my InDesign book to sRGB, and Converted all my Gray images to sRGB — that InDesign could Export a tagged sRGB PDF that Acrobat could Convert to Gray Dot Gain 20% (and I could follow the chain).
    Then again, I wish I knew what I was doing here...

  • NiDMM_IsOverRange

    The "niDMM Is Overrange" Function Does Not Return a True Value When The Signal is Over Range
    Basically, the way the document reads, the behavior is inconsistent and/or unpredictable. The doc does not specify which instruments/ranges/measurements the IsOverRange function will work reliably with (other than the one "For instance").
    However, this KB is for NI-DMM 2.5. Does anyone know if this has been resolved in later revisions? I have a 4070 with 3.1 and it "appears" to work when I test it on the bench, but the wording of the KB makes it sound like it shouldn't be trusted - very ambiguous wording: "Somtimes" and "Some modules".
    Solved!
    Go to Solution.

    Hi Marc,
    When a measurement is greater than the specified range, but not greater than 105% of that range, it is returned normally and a warning is returned. If a measurement is greater than 105% of the specified range, the measurement is replaced with NaN and a warning is returned. Additionally, if a measurement is under range, it is replaced with -Inf and a warning is returned.
    niDMM_IsOverRange simply checks a value for NaN and sets the boolean to true if the value is NaN. So, if a measurement is between 100% and 105% of the range, the only indication you will get is the warning returned by niDMM_Read/niDMM_ReadMultiPoint/niDMM_ReadWaveform.
    Brian
    Staff Software Engineer
    Modular Instruments R&D
    National Instruments
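A small model of the semantics Brian describes may make this clearer (an illustrative sketch only, not the actual NI-DMM driver code; the function and parameter names here are made up):

```python
import math

def simulated_read(value, range_limit):
    """Model of the described behavior: up to 105% of range the value
    comes back as-is (with a warning above 100%); beyond 105% it is
    replaced with NaN. Returns (reading, warning)."""
    if value > 1.05 * range_limit:
        return float("nan"), "over-range"
    if value > range_limit:
        return value, "over-range"
    return value, None

def is_over_range(reading):
    """Mirrors the described niDMM_IsOverRange: just a NaN test."""
    return math.isnan(reading)
```

So a reading at 102% of range produces a warning while is_over_range() stays false, which would match the ambiguous "appears to work" behavior seen on the bench.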

  • Automatic Re-capture in Captivate

    Years ago I worked with Captivate 3. There I had the option to make one capture of the English version of our software.
    I could then have Captivate re-record the same capture in our German version of the software, and so on.
    Is that feature still present in Captivate's newer versions? As far as I can tell from searches, it disappeared in version 5.
    Or do I need to get hold of an old version to do this?

    Dear Lata
    Please read thread:
    Product version inconsistent with software components
    Looks like an issue with the product assignment or other data for your managed SAP system.
    You can use the Landscape Verification 1.0 add-on for SAP Solution Manager to verify the consistency of your ECC sandbox in SAP Solution Manager. It can give you hints about what is not yet correct.
    Kind regards
    Tom

  • Concurrent DB Locking

    Hi,
    I am using the Berkeley's CDB type to create a database. The access method is BTree. In the application, typically there will be a single writer process that keeps inserting/updating records in this DB. There is another reader process (multi-threaded). Each of the threads in this process opens a READ Cursor on this DB, fetches some records based on certain filters and closes the cursor when done.
    What happens here is that when any of the reader threads opens a cursor, the writer process is blocked until the reader thread closes the cursor. This means that the reader takes a database-level read lock. How can this be optimized?
    Another question: if any reader thread is killed, the writer is blocked indefinitely. I know that a deadlock detector should be run periodically, which will free locks appropriately. But without the deadlock detector running, isn't there an optimized way of implementing a solution for this?
    Regards,
    Ravi Nathwani

    Hello,
    From the documentation:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/cam/intro.html
    Read cursors may open and perform reads without blocking
    while a write cursor is extant. However, any attempts to
    actually perform a write, either using the write cursor
    or directly using the DB->put or DB->del methods, will block
    until all read cursors are closed. This is how the
    multiple-reader/single-writer semantic is enforced, and prevents
    reads from seeing an inconsistent database state that may be an
    intermediate stage of a write operation.
    This sounds like your situation.
    Two of the sequences which can cause a thread to block itself
    indefinitely are:
    1. Keeping a cursor open while issuing a DB->put or
    DB->del access method call.
    2. Not testing Berkeley DB error return codes (if any
    cursor operation returns an unexpected error, that cursor
    must still be closed).
    Could you be running into these?
    Thanks,
    Sandra

  • DB_SECONDARY_BAD, how to resolve it?

    I use Berkeley DB in multiple threads.
    The primary DB uses ip as its key and struct D as its data; D contains time, which is the key of the secondary DB.
    The secondary DB uses time as its key.
    One thread performs inserts or updates of struct D (including the time in D) against the primary DB.
    Another thread retrieves data from the secondary DB using the key time.
    I set DB_CREATE | DB_INIT_LOCK | DB_INIT_MPOOL | DB_THREAD | DB_INIT_TXN in the environment,
    and the primary DB and secondary DB use the same environment.
    But I frequently get DB_SECONDARY_BAD: Secondary index inconsistent with primary when running my program.
    Is there something wrong with how I init the bdb environment? Should I add some other flags to keep the data consistent between the primary and secondary DB?
    Thanks

    Thanks for the replies.
    Here I list the frame of my code; I deleted the error-checking code to make it look clean.
    DB_ENV *env;
    DB p_db, s_db; //primary and secondary db
    /* init primary and secondary db */
    void init_bdb()
    u_int32_t env_flags = 0;
    db_env_create(&env, 0);
    /* I want to use TXN in primary db */
    env_flags = DB_CREATE | DB_INIT_LOCK | DB_INIT_MPOOL | DB_THREAD | DB_INIT_TXN | DB_DUP | DB_RECOVER | DB_INIT_LOG;
    env->set_lk_max_locks(env, 5000);
    env->set_lk_detect(env, DB_LOCK_MINWRITE);
    env->open(env, "db_directory", env_flags, 0);
    int flags = DB_CREATE | DB_THREAD | DB_AUTO_COMMIT;
    /* primary db */
    db_create(&p_db, env, 0);
    p_db->open(p_db, NULL, "primary.db", NULL, DB_BTREE, flags, 0644);
    /* secondary db */
    db_create(&s_db, env, 0);
    s_db->set_bt_compare(s_db, time_cmp);
    s_db->open(s_db, NULL, "secondary.db", NULL, DB_BTREE, flags, 0);
    /* callback function is not listed in this code */
    s_db->associate(s_db, NULL, s_db, callback, 0);
    /* this is for primary thread(primary db) */
    void primary()
    int ret;
    DBT key, data;
    struct timeval *tv, current_tv;
    memset(&key, 0, sizeof(DBT));
    memset(&data, 0, sizeof(DBT));
    data.flags = DB_DBT_MALLOC;
    /* transaction */
    DB_TXN *txn = NULL;
    env->txn_begin(env, NULL, &txn, 0); // begin transaction
    key.data = primary_key;
    key.size = sizeof(primary_key_type); //sizeof primary key
    p_db->get(s_db, txn, &key, &data, DB_RMW | DB_DBT_MALLOC);
    /* modify data or insert new data */
    p_db->put(s_db, txn, &key, &data, 0); //put modified data into db
    txn->commit(txn, 0);
    if (NULL != data.data)
    free(data.data);
    txn->abort(txn); //abort transaction
    /* secondary thread function (secondary db) */
    void secondary()
    int ret = 0;
    DBT key, data;
    DBC *cursorp = NULL;
    struct timeval *pstTv;
    memset(&key, 0, sizeof(DBT));
    memset(&data, 0, sizeof(DBT));
    s_db->cursor(s_db, NULL, &cursorp, 0);
    ret = cursorp->get(cursorp, &key, &data, DB_NEXT);
    /* retrieve data using the secondary db */
    for ( ; 0 == ret ; ret = cursorp->get(cursorp, &key, &data, DB_NEXT))
    /* just retrieving data here, never updating */
    cursorp->close(cursorp);
    If I just run the primary thread, this code works well.
    The secondary inconsistency appears when the secondary thread is used.
    Is there some error in the code above?
    Finally, I try to change to another method, using DB_INIT_CDB | DB_INIT_MPOOL flags.
    The book <BDB_Prog_Reference> says:
    "any attempts to actually perform a write, either using the write cursor or directly using the DB->put() or DB->del()
    methods, will block until all read cursors are closed. This is how the multiple-reader/
    single-writer semantic is enforced, and prevents reads from seeing an inconsistent database
    state that may be an intermediate stage of a write operation."
    I think it means that when you want to write data to the db, a write mutex must be acquired, and while it is held any other write and read operations are forbidden until the write mutex is released, so data consistency can be guaranteed.
    But is that usable with multiple threads?
    When using CDB, the secondary inconsistency disappeared, but I still get some data that does not exist in the primary DB.
    And I want to make sure whether CDB is usable with multiple threads and a secondary DB; if so, there is a big possibility that some erroneous code exists outside the bdb functions.
    Edited by: user13134137 on 2010-5-20, 5:18 AM
    Edited by: user13134137 on 2010-5-20, 5:33 AM

  • Shared lock vs exclusive lock

    Hello
    I have a question.
    If we have two scott sessions: I am updating table EMP in session 1. That means it is exclusively locked; it cannot be used by session 2. Then can we use a SELECT command on table EMP in session 2? According to me, this command should not work. But it is working.
    Reply me.
    Thanks in anticipation.

    984426 wrote:
    But in shared / exclusive lock, we have a property that we can't acquire shared lock on data if it is exclusively locked and vice versa. that means readers block writers and vice versa.
    E.g. if T1 is updating a row then how T2 can read that row? If T2 is reading then that is inconsistent data as it will pick the non-updated value of that row until T1 commits.
    Please explain.
    I am having doubts in this topic.
    Thanks
    You need to go back and check the basic concepts in the Concepts guide; your understanding of how Oracle works is completely wrong.
    In Oracle, the transaction isolation level is Read Committed. This means it is never possible for you to read inconsistent data: for such data, Oracle creates a consistent image for you from the undo blocks, where the old image is stored until the transaction is over (and for some time afterwards too, conditions applying). So if T1 is updating a row, T2 can't lock the same row, since it is locked exclusively, and another session's SELECT would see an old, consistent image of that change for as long as it is not committed. What you described doesn't and won't ever work that way in Oracle.
    Read the first link that's given to you already.
    Aman....
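The read-consistency behavior described above can be seen directly with two sessions (a sketch using the standard SCOTT.EMP demo table; the empno is arbitrary):

```sql
-- Session 1 (T1): update a row but do not commit; the row is now
-- exclusively locked.
UPDATE emp SET sal = sal * 1.1 WHERE empno = 7788;

-- Session 2: a SELECT never blocks; it sees the pre-update salary,
-- reconstructed from undo (read-committed isolation).
SELECT sal FROM emp WHERE empno = 7788;

-- Session 2: a write against the SAME row does block until T1
-- commits or rolls back.
UPDATE emp SET comm = 100 WHERE empno = 7788;  -- waits
```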

  • Client variable database repository exception handling

    We have recently had 2 incidents where client variable
    database storage appears to have failed in our clustered
    environment. We have 2 servers that share a sybase 12.5.2 client
    variable repository (CDATA/CGLOBAL). The problem seems to be
    occurring when a table that the application uses gets blocked by an
    unrelated application. When this happens, client variable storage
    seems to fail - the 2 servers apparently use whatever client
    variables they have in memory rather than getting the values from
    CDATA.
    There are no exceptions thrown, the application continues to
    function normally, except that each server has its own copy of the
    client variables. When we kill the process that has the table
    locked up, the servers start using CDATA normally again.
    The page that is getting blocked by the external process does
    not update any client variables, but it does read them. The
    inconsistent client variable problem occurs on other pages, not the
    one being blocked.
    I was able to sort of recreate the problem. I found that if I
    renamed CDATA, no error is reported to the user. The database
    exception (table not found) is logged, but not thrown on the page.
    To the user the application appears to function normally. So this
    demonstrates that updates to CDATA which result in database
    exceptions fail silently and are not thrown to the page.
    Has anyone else experienced silent exceptions in CDATA
    updates? Are there any workarounds for this behavior? Our
    application relies on client variables being consistent across
    servers, and we really need exceptions to be thrown to the page so
    that the user is aware.
    Additional Info:
    CFMX 7.0.2
    Windows Server 2003 SP1
    database is sybase 12.5.2

    This is one of the "classic" design problems in this kind of architecture. And, unfortunately, the answer is "it depends on how you think you need to handle it." And I'm sure there are plenty of "gurus" that will tell you one way or another is the only way to do it.
    I'll be more honest: I'll give you a couple of personal suggestions, based on experience in this architecture. These are suggestions - you may do with them what you will. I will not say this is the best, most correct, or even remotely relevant to what you're doing.
    If it's simple data validation for "typing" (e.g. String, number, Date, etc.), that is taken care of when you attempt to stuff the information into the appropriate DTO. If it's more "sophisticated" than that (must be in a certain range, etc.), that particular checking should probably be delegated from your Controller to a helper class. That not only saves the "expense" of transmitting the information back and forth across the wire, it's also "faster" for the end user to say "Ooopsie" by redirecting back to the form right then. Basically the same thing applies if the types are wrong.
    That only leaves the "big" problems in the business layer (EJBs), where you have to deal with concurrency, database failures, etc. Generally these kinds of exceptions are thrown back to the Controller in one of two forms:
    1) a subclass of RuntimeException, which signals that some Very Bad Things have happened in your container. EJBException is one like that, and you can see where it's being thrown from.
    2) a subclass of Exception, also called "application exceptions." They are usually something like a "duplicate record" or a validation-like error (which you mentioned) like a missing field. They're used as a signal to a failure in the logic, not the container. That way you have to decide at what layer of your architecture they should be handled and/or passed on to the next.
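A minimal Java sketch of the two forms described above (all class and method names here are illustrative, not from any real framework):

```java
// 1) Container/system failure: unchecked, signals Very Bad Things.
class OrderSystemException extends RuntimeException {
    OrderSystemException(String msg, Throwable cause) { super(msg, cause); }
}

// 2) Application exception: checked, signals a recoverable logic
// failure (duplicate record, missing field) that the Controller can
// turn into a user-facing message.
class DuplicateRecordException extends Exception {
    DuplicateRecordException(String msg) { super(msg); }
}

class OrderService {
    private final java.util.Set<String> ids = new java.util.HashSet<>();

    // The business method declares the application exception, so each
    // layer must decide whether to handle it or pass it on.
    void create(String id) throws DuplicateRecordException {
        if (!ids.add(id)) {
            throw new DuplicateRecordException("order already exists: " + id);
        }
    }
}
```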
