Corrupt table

Hi,
We have a log table that is notoriously getting corrupted.
The usage is to log all web requests, and in a batch job aggregate those requests and delete the aggregated rows.
Approx. rows per day is 400,000 (during the holidays...).
The table is defined as:
CREATE TABLE wisweb.DBA.Log (
  Id int NOT NULL,
  Time timestamp NULL,
  Servlet char(100) NULL,
  Params text NULL,
  Type int NOT NULL,
  TableName char(50) NULL,
  IdFieldName char(50) NULL,
  IdFieldValue int NULL,
  Referer text NULL,
  SearchString char(100) NULL,
  RecordCount int NOT NULL,
  Url text NULL,
  TableId int NOT NULL,
  ServletId int NOT NULL,
  IP varchar(50) NULL,
  UserAgent varchar(200) NULL
);
-- Creating indexes
CREATE INDEX "Time" ON Log ( "Time" ASC );
CREATE INDEX "Type" ON Log ( "Type" ASC );
CREATE INDEX "IdFieldValue" ON Log ( "IdFieldValue" ASC );
CREATE INDEX "RecordCount" ON Log ( "RecordCount" ASC );
CREATE INDEX "TableId" ON Log ( "TableId" ASC );
CREATE INDEX "ServletId" ON Log ( "ServletId" ASC );
CREATE INDEX "IdFieldName" ON Log ( "IdFieldName" ASC );
And when I validate the table I get:
Validate table log;
ERROR: Row count mismatch between table "Log" and index "Time"
SQL Anywhere Error -300: Run time SQL error -- Validation of table "Log" has failed
And the corresponding:
Validate index "Time" on Log;
Fails with the same error:(
I can rebuild the Time index without problem, but it still will not validate.
I can even drop the index and recreate it and it still fails validation!
And if I validate the table without the Time index, it fails on the IdFieldValue index.
Last time I had to rename the table, create a new one and copy the data over to the new table. But it only stayed uncorrupted for about two weeks:(
We are running the version:
dbsrv12 GA 12 0 1 3967 linux 2013/09/04 15:54:03 posix 64 production
Best regards
Ove Halseth

Ahh, I did not notice the Caution notice in the documentation:(
Then it makes sense that the table validates if we drop all indexes.
We found a bug in our aggregation routine that stopped the deletion of aggregated rows.
So when the number of rows grew, we tried to validate the table. And when that failed, we expected that to be the problem...
Thanks for the guidance:)
Ove
PS: I don't know why I can't mark your answer as correct...
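As a footnote to the thread above: the aggregate-then-delete batch job Ove describes (and whose failed deletion let the table grow) is the kind of routine where aggregation and deletion belong in a single transaction. A minimal sketch, using Python's sqlite3 as a stand-in for SQL Anywhere; the target table `LogAgg` and its columns are hypothetical, not from the thread:

```python
import sqlite3

def aggregate_and_purge(conn, cutoff):
    """Aggregate Log rows older than `cutoff` into LogAgg, then delete them,
    all in one transaction: if the delete fails, the aggregation is rolled
    back too, so rows can never be aggregated twice or silently pile up."""
    try:
        conn.execute(
            "INSERT INTO LogAgg (ServletId, Requests) "
            "SELECT ServletId, COUNT(*) FROM Log "
            "WHERE Time < ? GROUP BY ServletId", (cutoff,))
        conn.execute("DELETE FROM Log WHERE Time < ?", (cutoff,))
        conn.commit()      # aggregate and delete become visible together
    except Exception:
        conn.rollback()    # keep the un-aggregated rows intact on failure
        raise
```

Had the deletion step failed in a routine shaped like this, the whole batch would have rolled back and the failure would have surfaced immediately instead of as table growth.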

Similar Messages

  • MS Access Web App: corrupted table, cannot open in Access anymore

    **tldr: How can I delete a corrupted table that prevents me from opening my Web App in Access?**
    I used the Access desktop client to create a new table with approx 20 lookup fields. When I tried saving the table, I received an error message about too many indices. So I set the index option to "no" for all the lookup fields and tried to close
    the "edit table view". However, I was not able to close the edit table view anymore. After trying for a while, I used the task manager to terminate Access.
    Now, when I try to open my app in Access by clicking the "customize in Access" button on the web, I receive several error messages:
     1. Operation failed: the table xxx contains too many indices. Delete some indices and try again. (this error message appears about 5 times)
     2. Microsoft Access can not create the table
     3. A problem occurred when trying to access a property or method of the OLE object.
    Next, I'm at the Access start screen. My application does not open.
    So, is there any other way I can delete the corrupted table without opening it through the Access client? Maybe directly accessing the SQL server? The database is configured to allow read/write connections, because I connected to the tables from an Access Desktop
    Database, but I'm not sure if I can delete a table or fields that way. Any help is greatly appreciated!
    [I translated the error messages from German, so they might be slightly different in the English version]

    Not sure, but you may have too many indexes created; see the below article:
    Indexes on a Table
    Hope it's not corrupted though. If so, I hope you have a backup in place.
    Try to Compact & Repair and also check below article: 
    Recovering from Corruption
    Hope this helps,
    Daniel van den Berg | Washington, USA | "Anticipate the difficult by managing the easy"
    Please vote an answer as helpful if it helped. Please mark an answer (or answers) as the answer when your question has been answered.

  • Error converting DOC to PDF, corrupt table

    Hello,
    Using XP, MS Word 2003, Adobe Acrobat 8 Pro.
    Trying to convert a 15MB doc to PDF.  After 14 minutes, I get an MS Word error, "This error message may be the result of a corrupt table in the current document. You can recover the contents..."  The steps for recovery involve "open & repair", which is an option that I can't find (maybe MS Word 2007?)
    If I take the original doc & compress the photos to reduce the file size to about 11MB, then the PDF is OK.  I can also split up the doc into two documents.  Each part converts OK, then I can combine them.  Both options work, but both are time consuming.  (So I'm not buying the "corrupt table" error.)
    I have other docs that convert OK.  One is 29MB and converts in 2 minutes.  The difference that I can note is that the 15MB doc uses primarily nothing but text boxes for the entire doc.  Text & photos are inserted into each text box.  A dozen of these are smaller than 11MB & will convert, but very slowly.  Over that size & I get the above error.
    I can't change Word Doc format. (customer's requirements)
    Any help or suggestions are appreciated!
    Thanks
    Mike
    (contract mfg in VT)

    There has been a change with AA9 that handles the create PDF option a bit differently than before. I am not sure of the details, but if you are assuming the same process as I will describe you will at least have an idea of what is happening. In prior versions of Acrobat, there are basically 2 conversion processes. The right click in explorer to convert and opening the file in Acrobat both go back to the create PDF process in WORD, so let me just describe the WORD process. When you print to the Adobe PDF printer, then you are simply doing a conversion (or print) just like you would to paper. Excluding the printer metric issues with WORD (2007 turns these off by default), you should get a replica of the WORD file in appearance. Choices include downsampling the graphics and embedding the fonts, both of which are recommended. These are part of the printer properties.
    The end result of the print is effectively an electronic paper version of your original file. It is not recommended for editing (except for form fields and such), in just the same way it is preferred to not use whiteout on a typed paper version. The process of creating this print file is a two step process where a PS file is created (can be very memory intensive) and then the PS file is put through Distiller in the background to create the PDF (these require AcroTray to be active to do this automatically).
    PDF Maker (create PDF) adds several features to the file, but it is only available in selected applications like MS OFFICE. These features are added by including PDF Marks in the PS file created in the print process. You can include bookmarks, links, and tags for accessibility (tags tend to really bloat the file big time). The items to be included are in the preferences of PDF Maker in the application. It is also a good idea to use the Standard job settings as a minimum, but I typically recommend print or press options. I use a job settings file provided by a publisher that is optimized for journal publication.
    The fact that PDF Maker adds all the extra bits is likely why you are having problems. You can try turning all the added features off and then go back and add only the ones you need. That may help your situation. Keep in mind that graphics will be expanded (are not compressed) when sent to the PS file -- causing a huge file in some cases. The temporary storage is limited by your TEMP folder and not by the size of your hard disk, and that is the root of the problem for large files.
    As I said, PDF Maker has apparently changed a bit for AA9, but the idea is likely similar.

  • Corrupt table with cachedwithin bug

    I'm suddenly getting this error in different queries when
    using cachedwithin().
    java.lang.IllegalStateException
    corrupt table
    the query looks like this but there are others:
    <cfquery datasource="#request.dsn#" name="metals"
    cachedwithin=".125">
    select m.mid, m.metal from merch m
    where m.groupid in (select groupid from merch where mid =
    #val(url.MID)#)
    </cfquery>
    I have flushed the cache from the admin without success.
    I have tried reformatting the query in the hopes that it would
    purge the cache, but the new cache does the same.
    This started happening after I uninstalled coldfusion beta
    and installed the public trial last night.
    Any ideas?

    It started doing it again 4 hours after restart and light
    load. corrupt table null
    here is the stacktrace:
    java.lang.IllegalStateException: corrupt table
        at coldfusion.util.LruCache.reap(LruCache.java:214)
        at coldfusion.util.LruCache.get(LruCache.java:190)
        at coldfusion.sql.Executive.getCachedQuery(Executive.java:1262)
        at coldfusion.tagext.sql.QueryTag.setupCachedQuery(QueryTag.java:708)
        at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:517)
        at cfsub_cat2ecfm302901843.runPage(C:\inetpub\ultradiamonds.com\sub_cat.cfm:101)
        at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
        at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
        at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
        at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
        at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
        at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
        at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
        at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
        at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:74)
        at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
        at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
        at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
        at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
        at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
        at coldfusion.CfmServlet.service(CfmServlet.java:175)
        at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
        at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)
        at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42)
        at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
        at jrun.servlet.FilterChain.doFilter(FilterChain.java:94)
        at jrun.servlet.FilterChain.service(FilterChain.java:101)
        at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
        at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
        at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:284)
        at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
        at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
        at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:320)
        at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
        at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:266)
        at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
    type: java.lang.IllegalStateException

  • How to recover a corrupted MSWord Document - corrupted graphics / corrupted tables

    Hello, I am hoping to find someone who can guide me in recovering a Word document... I was working on it, had my auto save set to every 4 minutes, and all
    was well. I closed MS Word normally, only to find that the next time I opened it I got "a table in this document has been corrupted..."
    Just to be clear, I know all about Word's open and repair and recovering the last saved file etc. Those are not working. I get my file back, but several graphics and
    tables are missing/messed up.
    I am hoping that someone has some insight into this (maybe some document repair software that works well) and can help me restore the document, or at least open
    it somehow with the original pics showing, so that at least I know how to make changes to the new document.
    Hope this makes sense. Very urgent.

    
    If you're using Windows 7, the built-in shadow copy centre may have an older and uncorrupted version of your file.
    Launch Explorer, right-click the folder that contained the document and select Properties.
    If you see a Previous Versions tab then click that.
    If all is well then you'll see a list of entries for the folder, going back days, or maybe weeks.
    Double-click one with a date when you know the document was readable, and try opening this older version.
    Save it with a new name, and then repeat the process with later folder entries until you reach the point where the file became corrupted.
    If the methods above didn't work for you, you can use undelete tools like Any Data Recovery Pro (http://www.any-data-recovery.com/product/datarecoveryprofessional.htm)
    or Recuva (http://www.piriform.com/recuva/features) to help you find old Office temporary files. After that, rename the TMP extension to match the real document format, and then try opening it to see what you've recovered.
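The rename-the-TMP-extension step above can also be scripted when there are many candidate files. A minimal sketch in Python (stdlib only); the `.docx` target extension is an assumption, so adjust it to the real document format:

```python
from pathlib import Path

def restore_tmp_files(folder, target_ext=".docx"):
    """Copy every .tmp file in `folder` to a sibling file with the real
    document extension, so Word can attempt to open the recovered data."""
    recovered = []
    for tmp in Path(folder).glob("*.tmp"):
        out = tmp.with_suffix(target_ext)
        if not out.exists():                  # never clobber a real document
            out.write_bytes(tmp.read_bytes())
            recovered.append(out)
    return recovered
```

Copying rather than renaming keeps the original temporary file untouched in case the first extension guess is wrong.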

  • Corrupted table and associated mappings

    I believe I caused a problem by reimporting a table into OWB. All the mappings with the table became undeployable, and any new mappings with the table were undeployable and created hanging sessions when attempting to deploy. Export and import also led to an error. I tried deleting the table and importing again, but have fear in my heart now.
    Cannot import from exported file. This is the error:
    MDL1407: Cannot import TABLE with business name <T_DETAILS> matching by object identifier because a TABLE <MER.MER.T_DETAILS> with the same physical name <T_DETAILS> but different object identifier already exists.
    Do I need to start whacking packages in my target schema or the OWB schema?
    Thanks,

    Hi,
    The object which you are going to import has a different object identifier.
    Try changing the parameters while importing, as follows:
    Import option as "Add new metadata and replace existing objects"
    Match By as "Physical Names"
    If the problem still exists, remove the object which you are going to import from OWB and try again.
    If you are concerned, take a backup snapshot to be on the safe side.
    Hope this helps.
    Best Regards,
    Gowtham Sen.

  • The appearence of tables in some Word 2010 documents changes after KB2880529

    I wanted to alert you that, since our company has applied
    KB2880529, some users are reporting Word 2010 documents (docx) having their appearance changed.
    More precisely, the issue concerns the tables inserted in the Word document: they are all messed up. For example, the beginning of the table can look OK, then a few lines of the table are
    badly misaligned (like moved 2 cm to the right), then you've got a few normal lines, then again several bad ones, and so on.
    Also, some of the cells that were in the last column of the table may appear half outside of the table.
    And the worst part is that, even if you take some time to manually fix the table, when you save the document and re-open it, everything is bad again, and exactly as it was before... so basically
    the save doesn't work for the tables (it works if you change some text in the table, but not for the table itself (size of the columns, location of the columns, and so on)).
    I can't provide any file because they are of a very sensitive nature and can't leave our company; even if I remove most of the content, our IT security doesn't allow it.
    Anyway:
    - we are absolutely sure it's
    KB2880529 that does that, because when we uninstall the KB and re-open the document, it looks normal again.
    - it seems to concern only documents that were created using some old Word 2003 templates some time ago, and then opened in Word 2010. As far as I know it doesn't happen on 100% Word 2010
    documents.
    So, we are currently building a package installed by SCCM 2012 in order to uninstall it on all the PCs which received it a few days ago.
    Let's hope you'll correct that issue... and if possible that, in the near future, you'll add a feature in SCCM 2012 to allow us to uninstall KBs, like it was possible with WSUS.

    I'm sure there are some ways to fix the files one by one. Indeed, our tests indicate that it's possible, for example, to open them in "LibreOffice 4.2" (a fork of OpenOffice), which displays them correctly, then save the file as .doc, and then use
    Office 2010 to open this .doc and save it as .docx.
    But what I forgot to indicate is that the issue probably touches thousands of documents that have been placed in a "document management application" (I'm not sure how to translate it correctly) over the years, documents that hundreds of users may need to
    look at, for reference, from time to time... but basically the documents are "frozen" / archived; they must not be edited by anybody.
    I very much doubt you can expect MS to not apply updates, including via the next Service Pack, just because some tables in some of your documents are corrupt. Besides which, the same problem will quite likely resurface when you next upgrade to a newer version
    of Office. Ultimately, someone is going to have to check out all the documents with tables and verify their content. The scope of that project might be narrowed down after you've checked a few documents and found some common features between those with the
    corrupt tables. It's easy enough to write a macro to test the files to see which ones have tables. That can serve as the first step in narrowing the scope of the project. It's also possible to have the macro that repairs the files restore their
    original time/date stamps if that's important.
    An entirely different approach would be to temporarily uninstall the update on one PC. Then use that PC to convert all the documents to PDF. Then use the PDFs in the "document management application". Since the documents "must not be edited by anybody" the
    PDF format is inherently more secure in that regard and can have security attributes set to prevent printing and/or content copying. The PDF format is also impervious to Word's tendency to change document layouts whenever you do little things like updating
    printers or changing between doc & docx formats.
    Cheers
    Paul Edstein
    [MS MVP - Word]
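As a hypothetical alternative to the Word macro suggested above, the "which files have tables" check can be done outside Word entirely: a .docx file is a ZIP archive, and tables appear as `<w:tbl>` elements in `word/document.xml`. A minimal sketch in Python (stdlib only):

```python
import zipfile

def has_tables(docx_path):
    """Return True if a .docx file contains at least one Word table.
    A .docx is a ZIP archive; tables appear as <w:tbl> elements in
    word/document.xml, so a byte search is enough for triage."""
    with zipfile.ZipFile(docx_path) as z:
        xml = z.read("word/document.xml")
    return b"<w:tbl>" in xml or b"<w:tbl " in xml
```

Running this over the document store would narrow the repair project to the files that actually contain tables, without opening any of them in Word.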

  • Register Table in ABAP Dictionary

    One of our SAP RWD standard tables got deleted during the upgrade. I have a backup of that table & created it using CREATE TABLE SQL commands directly at the database level.
    I did exp & imp of various tables too; however, in SE11 the tables created directly in the database, no matter whether Z tables or SAP standard tables, do not show up.
    My question is: how do I register that table in the ABAP Dictionary?

    The table is a part of PSAPCRMUSR & does exist in schema SAPCRM.
    Let me give another example:
    One SAP table, STERM_LINK, got corrupted and we didn't know about it for almost a month. OSS replied back saying that it's a 3rd party table and we are not using any modules that would have updated/read that table. Also, that table didn't have any data.
    OSS suggested that either we exp/imp that table from QA to PRD, or they can send us a script which is a normal CREATE TABLE... script and we can drop the corrupted table.
    Now in this example, if we drop that table and recreate it using a script at the database level, it will be unknown in the dictionary.
    How to solve this kind of issue?

  • This entry already exists in the tables (OCRD - 2038) (131-138)

    I get this error without any add-on running, directly in SAP, trying to add a new Business Partner.
    We did reindex all the SQL tables for a BP with no success.
    However, we did upgrade this DB in a more recent environment and the problem went away.
    Are there any tools to rebuild the integrity of SAP database tables?  Something that could repair a corrupted table in SAP?
    Whatever we add in BP, it raises the error.  Even for entries that don't really exist.
    It's weird.....

    What is your current B1 version?  What new version have you tested?
    These two threads are somehow linked to the CRD object:
    Error message when vendor select in Purchase Order
    Re: DI : Transform Lead to Customer

  • How I repaired a corrupted LR 2.4 catalog (MacOS)

    Hi Everyone,
    recently one of my catalogs got corrupted (the larger one, needless to say). I tried all the restoring procedures available but nothing worked. I could not simply go back to the latest backup since it did not hold all the changes I made to an assignment I was working on. So I figured out that I could try and repair it myself. And I did. Here's what I did. Please note that this information is provided AS IS with no warranty of any kind. The procedure below has been done on MacOS and I do not know if it works the same in Windows. If you attempt this yourself, you take full responsibility.
    General knowledge of SQL and the workings of a DBMS are required.
    LR catalog is a SQLite 3 DB. Therefore you can use SQLite 3 and SQL to work with it. Here are the steps that I followed:
    1) Make a working copy of the corrupted catalog (here I named this TMPSRC.LRCAT). The procedure below is destructive and since there is no UNDO possible, I strongly suggest that you make copies of the DBs and move these to an empty working directory.
    2) Launch SQLITE3 (in a Terminal window) and dump the DB schema via the .schema command (save it in a TXT file for reference, you'll need this badly)
    3) Make a working copy of the latest backup catalog as the basis for all the modifications (here I named this TMPDST.LRCAT)
    4) The logic is to use the backup catalog and copy in it all the information of the corrupted catalog eliminating those elements that are actually corrupted.
    5) In SQLITE3 attach the two DBs with this command:
         attach "./tmpsrc.lrcat" as src;
         attach "./tmpdst.lrcat" as dst;
    6) Now, following the DB schema (previously saved) issue for each TABLE in the schema an analyze command on the SRC DB, like this:
         analyze src.Adobe_libraryImageDevelopHistoryStep;
    7) Sooner or later you'll find the culprit corrupted table (TABLE)
    8) To find the record which is the beginning of the corruption, issue this command:
         select id_local from src.TABLE;
    9) To show the last record, pick up the last number (RECNUM) displayed and issue this command:
         select * from src.TABLE where id_local = RECNUM;
    10) You should get an error message; if so, repeat the procedure by selecting the previous record number until you find one that does not give errors. This RECNUM will be WRECNUM.
    11) Now look into the DST DB and check how many records are missing from that very same table:
         select id_local from dst.TABLE;
    12) The logic now is to copy from the SRC all the records in TABLE up to the RECNUM that works (WRECNUM) into DST. In this process we'll overwrite the TABLE in DST.
         delete from dst.TABLE;
         insert into dst.TABLE select * from src.TABLE where id_local <= WRECNUM;
         reindex dst.TABLE;
    13) The tricky part is that, with high probability, the corrupted table is linked with some others that need to be updated by the same process as well. Therefore you need to go through the DB schema again and check each and every table, both in DST and SRC, to have the id_local aligned. If not, you should repeat the procedure above even if the table itself is not corrupted.
    The whole process took me about 3 hours on a 300MB catalog.
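Since an LR catalog is a SQLite 3 DB, the attach-and-copy core of steps 5) and 12) can also be driven from Python's sqlite3 module instead of the sqlite3 shell. A minimal sketch; `table` and `wrecnum` stand in for the TABLE and WRECNUM placeholders of the procedure above:

```python
import sqlite3

def copy_good_rows(dst_path, src_path, table, wrecnum):
    """Replace `table` in the backup catalog (dst) with all rows from the
    corrupted catalog (src) whose id_local is at or below the last record
    that could still be read without an error."""
    conn = sqlite3.connect(dst_path)
    conn.execute("ATTACH DATABASE ? AS src", (src_path,))
    # `table` is a trusted placeholder name, never user input
    conn.execute(f"DELETE FROM main.{table}")
    conn.execute(
        f"INSERT INTO main.{table} "
        f"SELECT * FROM src.{table} WHERE id_local <= ?", (wrecnum,))
    conn.execute(f"REINDEX {table}")
    conn.commit()
    conn.close()
```

As in the manual procedure, this would have to be repeated for every linked table so that id_local stays aligned across the schema.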
    HTH
    IAMLAPO

    Wow! Thanks for posting that! I hope it also inspires us all to make backups at the right time......

  • RMAN Skips the corrupted Block

    Hi,
    We are using the Oracle 10.2.0.4 RMAN utility for backup. In the SYSAUX tablespace, datafile 3, one block has been corrupted.
    I have tried to export the tablespace; I found a warning message with the corrupted block and table name. But RMAN, while trying to back up datafile 3, or validate the backup, doesn't pop up any error message and goes through without any problem.
    ==> I have set maxcorrupt 0 in the run block
    ==> Check logical I have used
    ==> V$database_block_corruption pops the result, but MAXCORRUPT is 0.
    ==> But analyzing the table shows the corrupted block error.
    ===> Trying to select * from the corrupted table throws the error message.
    Please advise
    Regards
    Krish

    Steve,
    Thank you for the doc, but I have already mentioned:
    --> I have tried check logical
    --> Also the MaxCorrupt parameter
    I am able to view the corrupted block information in database_block_corruption,
    but while backing up with RMAN using
    RMAN> run
    2> {
    3> set MAXCORRUPT for datafile 3 to 0;
    4> backup datafile 3;
    5> }
    executing command: SET MAX CORRUPT
    using target database control file instead of recovery catalog
    Starting backup at 27-APR-09
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=12 instance=TEST devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00003 name=+DBFILE_GRP1/TEST/datafile/sysaux.499.667900479
    channel ORA_DISK_1: starting piece 1 at 27-APR-09
    channel ORA_DISK_1: finished piece 1 at 27-APR-09
    piece handle=/usr/oradata/BKUP/TEST/backup/database/BIRTEF1_4229_1_1_45kdh7uj tag=TAG
    comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    Finished backup at 27-APR-09
    Starting Control File and SPFILE Autobackup at 27-APR-09
    Finished Control File and SPFILE Autobackup at 27-APR-09
    SQL> select * from v$database_block_corruption;
    FILE#  BLOCK#   BLOCKS  CORRUPTION_CHANGE#  CORRUPTIO
        3   23260        1                   0  CORRUPT
        3  123228        1                   0  CORRUPT
    It's backing up with no issues. That's the confusion here.
    REgards
    KRishnan

  • Getting rid of blank table cells

    I am currently making an address book and have had to occasionally remove names from the listings. What this means is that there are now blank cells all throughout my address book. How can I remove those cells?

    I am surprised that Word would be much smarter at doing this; InDesign has had the tables feature for quite a while now, I would have thought they would have worked this out by now. 
    <cough>
    Corrupt tables are pretty much the primary cause of corrupt Word documents. I've never seen a table cause a corrupt InDesign document. Word allows a lot of table-mangling, but I'd say that giving table-manglers what they want is the exact opposite of "smart."  Tables in Word are pure hell. InDesign's table implementation (purchased from a third party plugin developer, if I recall correctly), while not as... er... flexible as that of Word, is far more stable. I guess it depends on what you think is smart.
    Anyhow, back to what you want:
    The reason I designed it using tables like this is because this is the way that my client said they wanted it to be.
    Well, I hope you bill hourly.   There is a way to automate this, I think, using InDesign's Data Merge feature. If you aren't billing hourly, then it may be worth setting up - moving all of the content into an application like Excel, a spreadsheet app where it's really easy to do address-book-entry-management the way you want, and does not induce corruption when you have to spot-delete individual cells, unlike some other apps I can think of. You can then save a file out of Excel, and InDesign can automagically fill up your tables with content from your Excel file, with no gaps. When your client wants changes, then you just tweak your Excel file a bit, export, and then run the Data Merge to get your gap-free table back.
    However, that'd be a lot of work, and I can't even guarantee that it would work, never having tried it. But it's where I'd go in your shoes if my dataset wasn't this small:
    my list of names is relatively short so cutting and pasting shouldn't take too much time.
    Lastly, I think that your client might not care about whether or not they're in table cells; they just want it to look that way, right? If so, there are many, many ways to set up your doc that don't involve tables, that still yield the desired appearance. If you want to go in that direction, I can toss out a few ideas - I think that Rik already has, although his suggestion is a little bit low on detail.

  • InDesign CS5.5 to CS6 bug involving Tables

    Hello
    I am having issues when editing files that were created in InDesign CS5.5 with CS6, most specifically with Tables.
    When I open a file created in CS5.5 with CS6 some of the tables are corrupted. When I try to edit the content in the corrupt tables or even resize the text frame, the table and all content below become overset. Even adding a single character causes overset. (the text frame and table cells are large enough to accommodate the changes)
    I can edit the file in CS5.5 without any issues.
    Saving the file as .idml doesn't fix the problem. When I do this and open the .idml in CS6 the corrupted tables have already shifted and caused an overset before I make any edits.  
    I've also tried:
    - deleting all paragraph, character, and table styles
    - pasting the content into a new file
    The only way that fixed the problem was merging the table rows, but with the content we have this is definitely not ideal. The file is a multi page application form and we have more than one hundred similar files.
    Any guidance on this would be greatly appreciated.
    -kvan

    With every version there are changes to the text engine that affect composition. When you open a legacy file the changes are not made until you touch the text in some way. When you open a .idml file the changes are made at opening.

  • Data corrupt block

    OS: Sun 5.10, Oracle version 10.2.0.2, 2-node RAC
    alert.log contents:
    Hex dump of (file 206, block 393208) in trace file /oracle/app/oracle/admin/DBPGIC/udump/dbpgic1_ora_1424.trc
    Corrupt block relative dba: 0x3385fff8 (file 206, block 393208)
    Bad header found during backing up datafile
    Data in bad block:
    type: 32 format: 0 rdba: 0x00000001
    last change scn: 0x0000.98b00394 seq: 0x0 flg: 0x00
    spare1: 0x1 spare2: 0x27 spare3: 0x2
    consistency value in tail: 0x00000001
    check value in block header: 0x0
    block checksum disabled
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    When I search for the block id where the corruption occurred, the block id cannot be found.
    I searched using dba_extents.
    I wonder whether the block id cannot be found because of the corruption.
    If I run an export, the data exports normally.

    That's fortunate. It appears the block corruption did not occur in a block where data is stored. It also looks like you discovered it through an RMAN backup; is that right?
    Since the scn is 0x0000.98b00394 rather than scn: 0x0000.00000000, this looks like a soft corruption rather than a physical corruption.
    In that case a bug is likely, and a search turned up
    Bug 4411228 - Block corruption with mixture of file system and RAW files.
    It may not be this one, though.
    For root-cause analysis and handling of this kind of block corruption, you should make a formal request to Oracle Corporation. Open an SR through Metalink.
    Export cannot detect block corruption above the high water mark, and there are a few other cases, listed below, that it misses as well.
    DB Verify (dbv) can only find soft block corruption, not physical corruption. In my experience, there was a physical corruption where the datafile could not even be copied to /dev/null, yet dbv did not detect the problem.
    Given that, the best method is RMAN. RMAN backs up the data up to the high water mark while also checking the entire datafile. Since it checks logical corruption as well as physical corruption, I think RMAN is the best way to run this kind of check.
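    A minimal sketch of such an RMAN check (the BACKUP VALIDATE syntax is available from 9i onward; verify the exact commands against your release):

    ```sql
    -- Read and check every block up to the high water mark, without
    -- producing any backup pieces; CHECK LOGICAL adds logical checks.
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

    -- Any corrupt blocks found during validation are then listed in:
    SQL> SELECT file#, block#, blocks, corruption_type
         FROM v$database_block_corruption;
    ```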
    The Export Utility
    # Use a full export to check database consistency
    # Export performs a full scan for all tables
    # Export only reads:
    - User data below the high-water mark
    - Parts of the data dictionary, while looking up information concerning the objects being exported
    # Export does not detect the following:
    - Disk corruptions above the high-water mark
    - Index corruptions
    - Free or temporary extent corruptions
    - Column data corruption (like invalid date values)
    The proper way to recover from block corruption is to restore and then recover, but the backup you would restore from may itself already contain the block corruption. So it is best to restore it on another server first, confirm the datafile is sound, and only then restore it to the production environment.
    If the backup also contains the block corruption, or if there is no time for this, then move the data to another tablespace with a move tablespace on the tables or an index rebuild, drop the problem tablespace, and recreate it. (Since there is no data loss at present, the move tablespace / rebuild index approach seems good.)
    Handling Corruptions
    Check the alert file and system log file
    Use diagnostic tools to determine the type of corruption
    Dump blocks to find out what is wrong
    Determine whether the error persists by running checks multiple times
    Recover data from the corrupted object if necessary
    Preferred resolution method: media recovery
    Handling Corruptions
    Always try to find out if the error is permanent. Run the analyze command multiple times or, if possible, perform a shutdown and a startup and try again to perform the operation that failed earlier.
    Find out whether there are more corruptions. If you encounter one, there may be other corrupted blocks, as well. Use tools like DBVERIFY for this.
    Before you try to salvage the data, perform a block dump as evidence to identify the actual cause of the corruption.
    Make a hex dump of the bad block, using UNIX dd and od -x.
    Consider performing a redo log dump to check all the changes that were made to the block so that you can discover when the corruption occurred.
    Note: Remember that when you have a block corruption, performing media recovery is the recommended process after the hardware is verified.
    Resolve any hardware issues:
    - Memory boards
    - Disk controllers
    - Disks
    Recover or restore data from the corrupt object if necessary
    Handling Corruptions (continued)
    There is no point in continuing to work if there are hardware failures. When you encounter hardware problems, the vendor should be contacted and the machine should be checked and fixed before continuing. Full hardware diagnostics should be run.
    Many types of hardware failures are possible:
    Bad I/O hardware or firmware
    Operating system I/O or caching problem
    Memory or paging problems
    Disk repair utilities
    Related material follows below.
    All About Data Blocks Corruption in Oracle
    Vijaya R. Dumpa
    Data Block Overview:
    Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks (also called logical blocks, Oracle blocks, or pages), extents, and segments. The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information. The level of logical database storage above an extent is called a segment. The high water mark is the boundary between used and unused space in a segment.
    - Header: contains general block information, such as the block address and the type of segment (for example, data, index, or rollback).
    - Table Directory: this portion of the data block contains information about the tables having rows in this block.
    - Row Directory: this portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).
    - Free Space: allocated for insertion of new rows and for updates to rows that require additional space.
    - Row Data: this portion of the data block contains the rows in this block.
    Analyze the Table structure to identify block corruption:
    By analyzing the table structure and its associated objects, you can perform a detailed check of data blocks to identify block corruption:
    SQL> ANALYZE TABLE|INDEX|CLUSTER <object_name> VALIDATE STRUCTURE CASCADE;
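    As a concrete illustration, against a hypothetical table and index (scott.emp and emp_pk are placeholder names):

    ```sql
    -- Check each data block and row, and cross-check the table
    -- against all of its indexes.
    SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;
    -- A single index can also be validated on its own.
    SQL> ANALYZE INDEX scott.emp_pk VALIDATE STRUCTURE;
    ```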
    Detecting data block corruption using the DBVERIFY Utility:
    DBVERIFY is an external command-line utility that performs a physical data structure integrity check on an offline database. It can be used against backup files and online files. Integrity checks are significantly faster if you run against an offline database.
    Restrictions:
    DBVERIFY checks are limited to cache-managed blocks. It is only for use with datafiles; it will not work against control files or redo logs.
    The following example shows sample output of verification for the data file system_ts_01.dbf, with start block 9 and end block 25. The blocksize parameter is required only if the file to be verified has a non-2KB block size. The logfile parameter specifies the file to which logging information should be written, and feedback=2 displays one dot on the screen for every 2 blocks processed.
    $ dbv file=system_ts_01.dbf start=9 end=25 blocksize=16384 logfile=dbvsys_ts.log feedback=2
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    Output:
    $ pg dbvsys_ts.log
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = system_ts_01.dbf
    DBVERIFY - Verification complete
    Total Pages Examined : 17
    Total Pages Processed (Data) : 10
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index) : 2
    Total Pages Failing (Index) : 0
    Total Pages Processed (Other) : 5
    Total Pages Empty : 0
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Detecting and reporting data block corruption using the DBMS_REPAIR package:
    Note: this event can only be used if the block "wrapper" is marked corrupt, e.g. if the block reports ORA-1578.
    1. Create DBMS_REPAIR administration tables:
    To create the repair tables, run the package as below.
    SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN', 1, 1, 'REPAIR_TS');
    Note that the table name is prefixed with 'REPAIR_' or 'ORPHAN_'. If the second variable is 1, it creates a 'REPAIR_' table; if it is 2, it creates an 'ORPHAN_' table.
    If the action variable is:
    1, the package performs 'create' operations;
    2, the package performs 'delete' operations;
    3, the package performs 'drop' operations.
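    The same call can also be written with the package's named constants instead of the bare numbers, which is less error-prone (a sketch; confirm the constant names against your release):

    ```sql
    -- Create the repair table REPAIR_ADMIN in tablespace REPAIR_TS ...
    SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN',
         DBMS_REPAIR.REPAIR_TABLE, DBMS_REPAIR.CREATE_ACTION, 'REPAIR_TS');
    -- ... and drop it again once the repair session is finished.
    SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN',
         DBMS_REPAIR.REPAIR_TABLE, DBMS_REPAIR.DROP_ACTION);
    ```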
    2. Scanning a specific table or Index using the DBMS_REPAIR.CHECK_OBJECT procedure:
    In the following example we check the table EMP, which belongs to the schema TEST, for possible corruptions. Let's assume that we have created our administration table, called REPAIR_ADMIN, in schema SYS.
    To check the table block corruption use the following procedure:
    SQL> VARIABLE A NUMBER;
    SQL> EXEC DBMS_REPAIR.CHECK_OBJECT ('TEST', 'EMP', NULL,
    1, 'REPAIR_ADMIN', NULL, NULL, NULL, NULL, :A);
    SQL> PRINT A;
    To check which block is corrupted, check in the REPAIR_ADMIN table.
    SQL> SELECT * FROM REPAIR_ADMIN;
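    A more targeted query can pull out just the flagged blocks (the column names below match recent releases; confirm them with DESC REPAIR_ADMIN on your version first):

    ```sql
    -- Which object and block were flagged, and why.
    SQL> SELECT object_name, relative_file_id, block_id, corrupt_description
         FROM repair_admin;
    ```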
    3. Fixing corrupt block using the DBMS_REPAIR.FIX_CORRUPT_BLOCK procedure:
    SQL> VARIABLE A NUMBER;
    SQL> EXEC DBMS_REPAIR.FIX_CORRUPT_BLOCKS ('TEST', 'EMP', NULL,
    1, 'REPAIR_ADMIN', NULL, :A);
    SQL> SELECT MARKED FROM REPAIR_ADMIN;
    If you select from the EMP table now, you will still get the error ORA-1578.
    4. Skipping corrupt blocks using the DBMS_REPAIR.SKIP_CORRUPT_BLOCKS procedure:
    SQL> EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS ('TEST', 'EMP', 1, 1);
    Note the consequence of running the DBMS_REPAIR tool: you have lost some data. The main advantage of this tool is that you can retrieve the data past the corrupted block; however, the data in the corrupted block itself is lost.
    5. The DBMS_REPAIR.DUMP_ORPHAN_KEYS procedure is useful in identifying orphan keys in indexes that are pointing to corrupt rows of the table:
    SQL> EXEC DBMS_REPAIR.DUMP_ORPHAN_KEYS ('TEST', 'IDX_EMP', NULL,
    2, 'REPAIR_ADMIN', 'ORPHAN_ADMIN', NULL, :A);
    If you see any records in the ORPHAN_ADMIN table, you have to drop and re-create the index to avoid any inconsistencies in your queries.
    6. The last thing you need to do while using the DBMS_REPAIR package is to run the DBMS_REPAIR.REBUILD_FREELISTS procedure to reinitialize the free list details in the data dictionary views.
    SQL> EXEC DBMS_REPAIR.REBUILD_FREELISTS ('TEST', 'EMP', NULL, 1);
    NOTE
    Setting events 10210, 10211, 10212, and 10225 can be done by adding the following line for each event in the init.ora file:
    Event = "event_number trace name errorstack forever, level 10"
    When event 10210 is set, the data blocks are checked for corruption by checking their integrity. Data blocks that don't match the format are marked as soft corrupt.
    When event 10211 is set, the index blocks are checked for corruption by checking their integrity. Index blocks that don't match the format are marked as soft corrupt.
    When event 10212 is set, the cluster blocks are checked for corruption by checking their integrity. Cluster blocks that don't match the format are marked as soft corrupt.
    When event 10225 is set, the fet$ and uset$ dictionary tables are checked for corruption by checking their integrity. Blocks that don't match the format are marked as soft corrupt.
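    Applied to event 10210, for example, the template above expands to the following init.ora line (a configuration sketch only; set such events under Oracle Support's guidance):

    ```
    # Check data blocks for corruption as they are accessed
    event = "10210 trace name errorstack forever, level 10"
    ```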
    Set event 10231 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing full table scans:
    Event="10231 trace name context forever, level 10"
    Set event 10233 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing index range scans:
    Event="10233 trace name context forever, level 10"
    To dump an Oracle block you can use the command below from 8.x onwards:
    SQL> ALTER SYSTEM DUMP DATAFILE 11 BLOCK 9;
    This command dumps data block 9 of datafile 11 into the USER_DUMP_DEST directory.
    Dumping Redo Logs file blocks:
    SQL> ALTER SYSTEM DUMP LOGFILE '/usr/oracle8/product/admin/udump/rl.log';
    Block corruption in rollback segments will cause problems (ORA-1578) while starting up the database.
    With the support of Oracle, you can use the undocumented parameter below to start up the database:
    _CORRUPTED_ROLLBACK_SEGMENTS = (RBS_1, RBS_2)
    DB_BLOCK_COMPUTE_CHECKSUM
    This parameter is normally used to debug corruptions that happen on disk.
    The following V$ views contain information about blocks marked logically corrupt:
    V$BACKUP_CORRUPTION, V$COPY_CORRUPTION
    When this parameter is set, while reading a block from disk into the cache, Oracle will compute the checksum again and compare it with the value that is stored in the block.
    If they differ, the block is corrupted on disk. Oracle marks the block as corrupt and signals an error. There is an overhead involved in setting this parameter.
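    On later releases the equivalent setting is the DB_BLOCK_CHECKSUM initialization parameter; a sketch of enabling it dynamically (verify the parameter name and values for your version):

    ```sql
    -- TYPICAL computes a checksum when a block is written and
    -- verifies it when the block is read back from disk.
    SQL> ALTER SYSTEM SET db_block_checksum = TYPICAL SCOPE = BOTH;
    ```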
    DB_BLOCK_CACHE_PROTECT='TRUE'
    With this parameter set, Oracle will catch stray writes made by processes into the buffer cache.
    Oracle 9i new RMAN features:
    Obtain the datafile numbers and block numbers for the corrupted blocks. Typically, you obtain this output from the standard output, the alert.log, trace files, or a media management interface. For example, you may see the following in a trace file:
    ORA-01578: ORACLE data block corrupted (file # 9, block # 13)
    ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
    ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
    ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'
    $ rman target=rman/rman@rmanprod
    RMAN> run {
    2> allocate channel ch1 type disk;
    3> blockrecover datafile 9 block 13 datafile 2 block 19;
    4> }
    Recovering Data blocks Using Selected Backups:
    # restore from backupset
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET;
    # restore from datafile image copy
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM DATAFILECOPY;
    # restore from backupset with tag "mondayAM"
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM TAG = mondayAM;
    # restore using backups made before one week ago
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL 'SYSDATE-7';
    # restore using backups made before SCN 100
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL SCN 100;
    # restore using backups made before log sequence 7024
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL SEQUENCE 7024;
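    From 9i onward RMAN can also drive the recovery from the corruption list itself, rather than naming each block by hand (a sketch; check the command against your release):

    ```sql
    -- Repair every block currently recorded in
    -- V$DATABASE_BLOCK_CORRUPTION in a single pass.
    RMAN> BLOCKRECOVER CORRUPTION LIST;
    -- The remaining entries can then be re-checked:
    SQL> SELECT COUNT(*) FROM v$database_block_corruption;
    ```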
    Post edited by:
    Min Angel (Yeon Hong Min, Korean)

  • Remove Corrupted Safe Sender

    When using powershell (or OWA) to turn on junk email filtering I get the error:
    [PS] C:\scripts>Set-MailboxJunkEmailConfiguration "first last" -enabled $true
    Junk e-mail validation error. Value: [email protected]
    I can't see this entry displayed in any senders list in Outlook or OWA (Safe Senders, Blocked Senders etc) so apparently it is a corrupted table entry somewhere.
    Any ideas how to find and remove the entry from the list?
    Thanks
    -pete

    I would try moving the mailbox to another database (that often cleans things up),
    or use MFCMAPI and whack the junk mail rules:
    http://support.microsoft.com/kb/2860406
    Thanks. I thought about moving the mailbox to see if that would work; I'll try that first. Thanks for the link to the MFCMAPI process; I was looking for that.
