A question about cache group error in TimesTen 7.0.5

Hello Chris,
We are getting some cache group errors:
2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-ogTblGC00405: Failed calling OCI function: OCIStmtFetch()
2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-raUtils00373: Oracle native error code = 1405, msg = ORA-01405: fetched column value is NULL
2008-09-21 08:56:28.16 Err : ORA: 229576: ora-229576-2057-raStuff09837: Unexpected row count. Expecting 1. Got 0.
The exact scenario is: our Oracle server was restarted for some reason, but we did not restart the cache agent, and after that these errors started to appear.
We want to know: if the Oracle server is restarted, do we need to restart the cache agent? Thank you.

Yes, the tracking table will track all changes to the associated base table. Only changes that meet the cache group WHERE clause predicate will be refreshed to TimesTen.
The tracking table is managed automatically by the cache agent. As long as the cache agent is running and AUTOREFRESH is occurring the table will be space managed and old data will be purged.
It is okay if an AUTOREFRESH very occasionally fails to complete within its defined interval, but if this happens with any regularity then it is a problem, since the situation is unsustainable. To remedy it you need to try one or more of the following:
1. Tune execution of AUTOREFRESH queries in Oracle. This may mean adding additional indexes to some of the cached Oracle tables. There is an article on this in MetaLink (doc note 473493.1).
2. Increase the AUTOREFRESH interval so that a refresh can always complete within the defined interval.
In any event it is important that you have enough space to cope with the 'steady state' size of the tracking table. If the cache agent will not be running for any significant length of time you need to clean up the tracking table manually. TimesTen 11g provides a script for this, but it is not officially supported in TimesTen 7.0.
If the rate of updates on the base table is such that you cannot arrive at a sustainable situation by tuning etc., then you will need to consider more radical options, such as breaking the table into multiple separate tables :-(
Chris

Similar Messages

  • Question about Everyone Group in SharePoint 2013

    Hi,
I have a couple of questions about the EVERYONE group:
         - As per best practice, which group should we use instead of the EVERYONE group in SharePoint?
         - What is the difference between the Everyone group and the All Authenticated Users group?
We have added the Everyone group to different sites. If we hide this group from showing up in the SharePoint people picker, is there any impact in terms of the current sites?
         - Is there any way to hide the Everyone group in the people picker only at the site / site collection level?
    Please help.
    Thanks
    srabon

    There is no functional difference between the Everyone group and All Authenticated Users (after Active Directory has been upgraded to Server 2003 native schema).
    I'm not aware of any function to hide the group from the People Picker.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • About cache group

I have a weird problem: a program can insert data into a TimesTen table normally when there is no cache group with Oracle, but it cannot do so when the table is connected to Oracle through a cache group. Any idea why this happens?
    error message:
    *** ERROR in tt_main.c, line 90:
    *** [TimesTen][TimesTen 7.0.3.0.0 ODBC Driver][TimesTen]TT5102: Cannot load backend library 'libclntsh.so' for Cache Connect.
    OS error message 'ld.so.1: test_C: ???: libclntsh.so: ????: ???????'. -- file "bdbOciFuncs.c", lineno 257,
    procedure "loadSharedLibrary()"
    *** ODBC Error/Warning = S1000, Additional Error/Warning = 5102

I think I can exclude the above possibilities, as I have checked all the settings mentioned.
We can use SQL statements as input, and inserts and queries can be done at both ends.
It is only the program that does not work. My connection string is the following:
connstr=DSN=UTEL7;UID=utel7;PWD=utel7;AutoCreate=0;OverWrite=0;Authenticate=1
Maybe it is a mistaken property, a permission, or a switch parameter? Please give some suggestions.
    Thank you very much.
    Create cache group command is:
    Create Asynchronous Writethrough Cache Group utel7_load
    From
    utel7.load(col0 binary_float, col1 binary_float ......
    My odbc.ini is the following:
    # Copyright (C) 1999, 2007, Oracle. All rights reserved.
    # The following are the default values for connection attributes.
    # In the Data Sources defined below, if the attribute is not explicitly
    # set in its entry, TimesTen 7.0 uses the defaults as
    # specified below. For more information on these connection attributes,
    # see the accompanying documentation.
    # Lines in this file beginning with # or ; are treated as comments.
    # In attribute=_value_ lines, the value consists of everything
    # after the = to the end of the line, with leading and trailing white
    # space removed.
    # Authenticate=1 (client/server only)
    # AutoCreate=1
    # CkptFrequency (if Logging == 1 then 600 else 0)
    # CkptLogVolume=0
    # CkptRate=0 (0 = rate not limited)
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # Connections=64
    # DatabaseCharacterSet (no default)
    # Diagnostics=1
    # DurableCommits=0
    # ForceConnect=0
    # GroupRestrict (none by default)
    # Isolation=1 (1 = read-committed)
    # LockLevel=0 (0 = row-level locking)
    # LockWait=10 (seconds)
    # Logging=1 (1 = write log to disk)
    # LogAutoTruncate=1
    # LogBuffSize=65536 (measured in KB)
    # LogDir (same as checkpoint directory by default)
    # LogFileSize=64 (measured in MB)
    # LogFlushMethod=0
    # LogPurge=1
    # MatchLogOpts=0
    # MemoryLock=0 (HP-UX, Linux, and Solaris platforms only)
    # NLS_LENGTH_SEMANTICS=BYTE
    # NLS_NCHAR_CONV_EXCP=0
    # NLS_SORT=BINARY
    # OverWrite=0
    # PermSize=2 (measured in MB; default is 2 on 32-bit, 4 on 64-bit)
    # PermWarnThreshold=90
    # Preallocate=0
    # PrivateCommands=0
    # PWD (no default)
    # PWDCrypt (no default)
    # RecoveryThreads=1
    # SQLQueryTimeout=0 (seconds)
    # Temporary=0 (data store is permanent by default)
    # TempSize (measured in MB; default is derived from PermSize,
    # but is always at least 6MB)
    # TempWarnThreshold=90
    # TypeMode=0 (0 = Oracle types)
    # UID (operating system user ID)
    # WaitForConnect=1
    # Oracle Loading Attributes
    # OracleID (no default)
    # OraclePWD (no default)
    # PassThrough=0 (0 = SQL not passed through to Oracle)
    # RACCallback=1
    # TransparentLoad=0 (0 = do not load data)
    # Client Connection Attributes
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # PWD (no default)
    # PWDCrypt (no default)
    # TTC_Server (no default)
    # TTC_Server_DSN (no default)
    # TTC_Timeout=60
    # UID (operating system user ID)
    [ODBC Data Sources]
    TT_tt70=TimesTen 7.0 Driver
    TpcbData_tt70=TimesTen 7.0 Driver
    TptbmDataRepSrc_tt70=TimesTen 7.0 Driver
    TptbmDataRepDst_tt70=TimesTen 7.0 Driver
    TptbmData_tt70=TimesTen 7.0 Driver
    BulkInsData_tt70=TimesTen 7.0 Driver
    WiscData_tt70=TimesTen 7.0 Driver
    RunData_tt70=TimesTen 7.0 Driver
    CacheData_tt70=TimesTen 7.0 Driver
    Utel7=TimesTen 7.0 Driver
    TpcbDataCS_tt70=TimesTen 7.0 Client Driver
    TptbmDataCS_tt70=TimesTen 7.0 Client Driver
    BulkInsDataCS_tt70=TimesTen 7.0 Client Driver
    WiscDataCS_tt70=TimesTen 7.0 Client Driver
    RunDataCS_tt70=TimesTen 7.0 Client Driver
    # Instance-Specific System Data Store
    # A predefined instance-specific data store reserved for system use.
    # It provides a well-known data store for use when a connection
    # is required to execute commands.
    [TT_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/TT_tt70
    DatabaseCharacterSet=US7ASCII
    # Data source for TPCB
    # This data store is created on connect; if it doesn't already exist.
    # (AutoCreate=1 and Overwrite=0). For performance reasons, database-
    # level locking is used. However, logging is turned on. The initial
    # size is set to 16MB.
    [TpcbData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TpcbData
    DatabaseCharacterSet=US7ASCII
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Data source for TPTBM demo
    # This data store is created everytime the benchmark is run.
    # Overwrite should always be 0 for this benchmark. All other
    # attributes may be varied and performance under those conditions
    # evaluated. The initial size is set to 20MB and durable commits are
    # turned off.
    [TptbmData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmData
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Source data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the source data store.
    [TptbmDataRepSrc_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepSrc_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Destination data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the destination data store.
    [TptbmDataRepDst_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepDst_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Data source for BULKINSERT demo
    # This data store is created on connect; if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0).
    [BulkInsData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/BulkInsData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=32
    WaitForConnect=0
    Authenticate=0
    # Data source for WISCBM demo
    # This data store is created on connect if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0). For performance reasons,
    # database-level locking is used. However, logging is turned on.
    [WiscData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/WiscData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Default Data source for TTISQL demo and utility
    # Use default options.
    [RunData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/RunData
    DatabaseCharacterSet=US7ASCII
    Authenticate=0
    # Sample Data source for the xlaSimple demo
    # see manual for discussion of this demo
    [Sample_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/Sample
    DatabaseCharacterSet=US7ASCII
    TempSize=16
    PermSize=16
    Authenticate=0
    # Sample data source using OracleId.
    [CacheData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/CacheData
    DatabaseCharacterSet=US7ASCII
    OracleId=MyData
    PermSize=16
    # New data source definitions can be added below. Here is my datastore!!!
    [Utel7]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/tt70_data/utel7
    DatabaseCharacterSet=ZHS16GBK
    Uid=utel7
    Authenticate=0
    OracleID=db3
    OraclePWD=utel7
    PermSize=6000
    Connections=20
    #permsize*20%
    TempSize=400
    CkptFrequency=600
    CkptLogVolume=256
    LogBuffSize=256000
    LogFileSize=256

  • Question about split group by in IQ16 SP08.

    Dear all,
I have a customer who ran into a performance problem with a "union all view".
As per our analysis, the IQ optimizer doesn't split the GROUP BY.
    [Query]
select "EDPS_CSN", count(*)
      from "adwown"."vw_adw_dpy111n_01"
      group by "edps_csn"
As far as I know, there are some restrictions on the situations and queries that benefit from the split GROUP BY, but this view meets all of them.
    Please refer to the below URL.
    [Impact on query performance of GROUP BY over a UNION ALL]
    - http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00169.1520/html/iqperf/iqperf35.htm
  So I would like to ask the following questions:
  1) How can I force the split GROUP BY?
  2) Are there any changes to the split GROUP BY in IQ16 SP08?
  I failed to attach the query plan because its extension is html.
  Any comments or advice will be greatly appreciated.
      ** Base Table Rows
        1) ADWOWN.TB_ADW_DPY119N : 23,259
        2) ADWOWN.TB_ADW_DPY111N : 398,017,348
        3) ADWOWN.TB_ADW_DPY117N : 16,160,487
    Thanks
    Gi-Sung Jang

The big issue is that the GROUP BY is on the view, not on the base tables.  At the time of optimization, we don't always know the data distribution of the GROUP BY key.  At the least, we don't know whether or not each table in the UNION ALL has overlapping data for that column.  Consequently, you have to retrieve all the data first, sort/order it, then run the GROUP BY.
    There are caveats to this, as your link provides.  But what is missing from your post is the logic of the view.  Can you provide that?
    Mark
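The rewrite being discussed (aggregating each branch of the UNION ALL first, then combining the partial counts) can be illustrated with a small, database-agnostic sketch. sqlite3 stands in for IQ here, and the table names `t1`/`t2` are made up; only the column name comes from the original query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t1 (edps_csn TEXT)")
cur.execute("CREATE TABLE t2 (edps_csn TEXT)")
cur.executemany("INSERT INTO t1 VALUES (?)", [("a",), ("a",), ("b",)])
cur.executemany("INSERT INTO t2 VALUES (?)", [("b",), ("c",)])

# Unsplit plan: materialize the whole UNION ALL, then GROUP BY the result.
naive = cur.execute("""
    SELECT edps_csn, COUNT(*)
      FROM (SELECT edps_csn FROM t1
            UNION ALL
            SELECT edps_csn FROM t2)
     GROUP BY edps_csn
     ORDER BY edps_csn
""").fetchall()

# Split plan: GROUP BY each branch first, then sum the partial counts.
split = cur.execute("""
    SELECT edps_csn, SUM(cnt)
      FROM (SELECT edps_csn, COUNT(*) AS cnt FROM t1 GROUP BY edps_csn
            UNION ALL
            SELECT edps_csn, COUNT(*) AS cnt FROM t2 GROUP BY edps_csn)
     GROUP BY edps_csn
     ORDER BY edps_csn
""").fetchall()

print(naive)   # [('a', 2), ('b', 2), ('c', 1)]
print(split)   # same result; each branch can be aggregated independently
```

The two forms are equivalent for COUNT (and SUM/MIN/MAX), which is why the optimizer can apply the split when its restrictions are met; the sketch only shows the algebraic rewrite, not how IQ decides to use it.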

A question about cache group

Hello Chris,
We have the following situation: we have cache groups and replication, but the replication schemes do not include the cache groups. When we modify the OraclePWD value we need to call "call ttcacheuidpwdset(***,***);", and then we get the error "The operation cannot be executed while the Replication Agent for this datastore is running.". How can we avoid this situation? We do not want to restart the replication agent, because during the restart the application will see some timeouts. Thank you.
The cache group type is a read-only cache group.
Edited by: user578558 on 2009-1-15 7:42 PM

There is no way to avoid this situation. Many operations, including setting the cache userid/password, require that the replication agent be stopped while they are executed. This should only be an issue for the application if you are using RETURN RECEIPT or RETURN TWOSAFE replication. In that case, when the repagent is stopped the application may receive a return service timeout warning (8170). However, the impact of this can be minimised by ensuring that your replication configuration includes appropriate STORE clauses with RETURN SERVICES OFF WHEN REPLICATION STOPPED. Even with this clause the application may receive one warning when the repagent is stopped. Applications that use RETURN RECEIPT or RETURN TWOSAFE must be coded to expect 8170 warnings and to react accordingly.
    Chris

  • Two questions about Logon Group

    About logon group, it describes as below in the help.sap.com.
    1. Each SAP application has different resource requirements. Certain applications may therefore require more servers and logon groups. For example, you should assign separate servers for the application component PP.
Q1: How does a certain application use more than one server via a logon group, and how does it use SAP memory that resides on different servers?
    2. If it is not practical for you to assign separate servers to integrated applications, such as the application components SD-MM and FI-CO, you should assign common logon groups to these applications.
    Q2: I don't understand this sentence exactly.
    Thanks so much.
    James

    Not sure if I exactly understood what your problem is, but let me give it a try:
    A1: One logon group may have several servers attached to it. If users user1, user2, user3, ... are going to connect to the logon group, they will be sent to different servers. None of those users will be able to use memory from more than one server. But, let's say user1 and user2 will use resources from server1, whereas user3 will use resources from server2. The goal is that all servers will have the same (or similar) load, just by distributing users.
    A2: If it is not possible to have four logon groups for the four applications SD, MM, FI, CO, but you still want different logon groups, then, at least, you should create two logon groups, one for SD and MM, the other one for FI and CO. That's because resource requirements are similar for SD and MM, and for FI and CO.
    hope this helps

A question about the message "Customizing Error in Work Schedule Rule ..."

    Dear all,
I run Start Payroll and I am getting the error "Customizing Error in WSR" for that personnel number.
I configured all of Time Management with a start date of 01.01.2000, and when I run payroll for period 01/2000 (01.01.2000 to 31.01.2000) for employees hired on 01.01.2000, I get the error "Customizing error in work schedule rule ...".
But I can run payroll for period 02/2000 (01.02.2000 to 29.02.2000) for employees hired on 01.02.2000; from 02/2000 onwards it is okay.
Please help me solve my problem.
Thanks for your answers
    Regds
    Huyen Nguyen

    Customizing Error in Work Schedule Rule
Where are you getting this error?
If it is in the payroll (PY) log, check GENPS; this error comes under that function.
It is the combination of
your holiday calendar,
your employee subgroup grouping,
your personnel subarea grouping,
and your daily work schedule rule.
Check the start and end dates of all the above settings, along with their groupings, in table V_T508A.

  • Question about cache of sequence

desc table temp1
id number (primary key)
comp number(5)
I have a trigger called BI_TEMP1
Trigger Type: BEFORE EACH ROW
Triggering Event: INSERT
    begin
    for c1 in (
    select TEMP1_SEQ.nextval next_val
    from dual
    ) loop
    :new.ID := c1.next_val;
    end loop;
    end;
The sequence is defined as
    Min Value 1
    Max Value 999999999999999999999999999
    Increment By 1
I have a program which inserts 14 rows into the table.
After the program finished I did:
select max(id) from temp1;
14
When I run the program again to insert those 14 rows, the sequence starts from 21 instead of 15.
Now I know that this is because the cache equals 20.
My question is: should I use a cache or not?
I read about the cache option and did not see its advantages.
In which cases is it better to use a cache, and in which is it not?
    thanks in advance

First of all, using a sequence is no guarantee that you'll end up without gaps! Transactions can be rolled back, etc., just like coffee can be spilt on your chequebook or whatever.
A cache for the sequence values is useful because it means that Oracle can store the next sequence values in memory, cutting down on the work that needs to be done. If you don't have a cache, this is what happens:
    1. Get the next value from the sequence.
    2. use the value
    3. Get the next value from the sequence.
    4. use the value
    5. Get the next value from the sequence.
    6. use the value
    etc...
    However, if you have a cache of 5, this is what happens:
    1. Get the next 5 values from the sequence and store in memory.
    2. use the first value
    3. use the second value
    4. use the third value
    5. use the fourth value
    6. use the fifth value
    7. Get the next 5 values from the sequence and store in memory.
    8. use the first value
    etc...
So a cache reduces the number of calls to the sequence. However, as soon as the memory is wiped (a database bounce, shared_pool flush, etc.) the cached sequence numbers are gone, and the next time you ask for the next sequence value, it has to go back to the sequence.
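The trade-off above can be sketched with a toy model of a cached sequence: batching reduces round trips to the sequence, but whatever remains of a batch is lost when the cache is wiped, which is exactly why the poster saw the second run start at 21 instead of 15. This is an illustration only; the class, its names, and the trip counter are invented, not Oracle internals:

```python
class CachedSequence:
    """Toy model of a sequence that hands out values in cached batches."""

    def __init__(self, cache_size):
        self.cache_size = cache_size
        self.next_uncached = 1   # next value the "real" sequence would hand out
        self.cache = []          # values currently held in memory
        self.trips = 0           # round trips to the underlying sequence

    def nextval(self):
        if not self.cache:
            # One trip fetches a whole batch of values.
            self.trips += 1
            self.cache = list(range(self.next_uncached,
                                    self.next_uncached + self.cache_size))
            self.next_uncached += self.cache_size
        return self.cache.pop(0)

    def flush(self):
        """Simulate an instance bounce / shared_pool flush: cached values are lost."""
        self.cache = []

seq = CachedSequence(cache_size=20)
first_run = [seq.nextval() for _ in range(14)]   # values 1..14, one trip
seq.flush()                                      # cached values 15..20 are lost
second_run = [seq.nextval() for _ in range(14)]  # starts at 21, one more trip

print(first_run[-1], second_run[0], seq.trips)   # 14 21 2
```

With no cache there would be 28 trips and no gap; with a cache of 20 there are 2 trips and a gap of 15..20, which is the usual "fewer calls versus gaps after a bounce" decision.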

  • Concept Question about EDI and Error Processing

    Hello All,
    This is a concept question, I was wondering how others would approach this scenario.
    Let's take this scenario. In EDI often the transmission for a purchase order comes in with an invalid material number. Normal error processing is for the EDI team to research the issue (find the correct matnr), edit the idoc and reprocess the Idoc.  I would like to move that type of error processing out of the hands of the EDI folks and into the hands of the business users.
    How would they correct the material number? It has to be more intuitive than editing an IDoc, it has to be an easy and intuitive user interface. This process has to also work for many types of EDI transmissions both inbound and outbound.
    There would be many ways to handle this: create a report with an editable grid that would change the idoc during input and create the order with the corrections, convert the idoc to xml and use simple transformations, use an xsl report...etc.  You get the point. 
    How would you approach this?  Before I start the design I want to make sure I've given all the options due consideration...all options, webdynpro, transformations, regex, you name it...all is available, except there is no XI. 
    Personally I'm leaning towards an editable grid and processing buttons that would post the idoc in background...but I do that type of thing all the time and I may be in a rut and neglecting better design options.
    Thanks for your input,
    Greg

    <b>Paul:</b> So it runs the transaction silently in BDC format, until the error occurs, then opens up in dialogue to allow the user to change the invalid material, and then continue on with processing.
    This works when the processing function module uses BDC. But even then I think this is possibly nice from user perspective, but a nightmare from auditing perspective. I.e. correct me if I'm wrong, but I'm pretty sure there's no log indicating that the user changed the material number. Thus for anybody comparing the IDoc contents against the posted document (including change history) there's no trail that shows this change. Of course you can assume that this is what must have happened, but I personally prefer if I can track in the system what happened and have proof for that.
    <b>Reddy:</b>
    <ol>
<li>It can be run on a daily basis and should select all IDocs which are in status 51 with the message number related to a wrong material number. The report output should include: IDoc number, the wrong material number, and space for the new material number to be entered by the business against the wrong one. And there should be one RUN button.</li>
<li>After RUN, the material number should be changed to the new one in the segments and the IDocs should be reprocessed.</li>
<li>Repeat the run until the business enters the right material number.</li>
    </ol>
The design seems too limited to me (it takes care of only one error message). It might work if that's the main pain point and this is the only one the user is dealing with; otherwise I'd expect them to start complaining pretty soon about having to use different tools for the possible errors. I'd keep the report more general, but allow this special form of processing only for a given error message (otherwise it's a normal re-process, as triggered for example via BD87).
    Also, I assume that when you talk of changing the IDoc you mean that you actually keep an original copy around (like SAP does when you edit an IDoc). Often this is required from an auditing perspective. I'm not sure why you wouldn't want to check the material number <em>before</em> trying to process the IDoc to avoid wasting system resources (but maybe I misunderstood the step).
    Anyhow, in theory you could also achieve all of this via workflow. You can add custom columns to the work item overview in the inbox, only issue here is that it doesn't scale well (so issues with larger volumes).

  • Question about cache

    hi all
We just upgraded APEX from version 1.6.1.00.03 to version 3.1.2.00.02.
Because this was a massive upgrade, I need to run some tests on my application.
I've noticed that in the new version there is a cache option on regions, in the page edit attributes, etc.
Since there is no cache option in the old version, and the application in the new version is an import from the old one (I did an export from the old and an import into the new), I want to know: does APEX 3.1 cache these pages by default?
Or is the default that pages and regions are not cached unless I decide so?
Or do the page and region cache defaults differ from each other?
This is important for me to know, because if the application's default is to cache, I need to go to every page and change it.
My second question is: let's say the pages are cached; does that mean that processes on the page (after submit, before header, etc.) do not run?
    thanks in advance
    Naama

    Hi Scott,
    In her first post, Naama is talking about page processes, in general, and mentioned ‘after submit’ and ‘before header’ – “is that say that if i have process in this page (after submit,before header etc.) they do not perform?” – and your response was also general – “That's correct”. I agree that for cached pages, the Show related processes, like ‘before header’, are not running, but the Accept processes, like ‘after submit’? Aren’t they fired regardless of the cache status?
    Thanks,
    Arie.

  • Question about "missing glyph" error in CS4/CS5 live preflight

    Hi all
    Basically I'm wondering what the "missing glyph" error refers to specifically and what are the implications of such an error. The help file doesn't mention it, as far as I can see.
I have some PDFs (from WordPerfect) which cause this error when imported into InDesign. I think the pages which cause the error have been corrected with a PDF editor (Iceni Infix), but Acrobat reports all the fonts are embedded and subsetted. A third-party preflight program reports everything is okay too.
    Am I safe to ignore these errors?
    thanks,
    Iain

    I'm not sure "Missing Glyph" picks up missing glyphs in PDFs. What does ID point to when you click the page number hyperlink in the Preflight panel?
    If you have missing glyphs in your document text, they will look something like this:
    -- then again, perhaps you cannot see the pink highlight because of background (an image) or foreground (another character 'on top of it', or perhaps even a graphic over the text).
How serious is this? It's ... critical! As you can see in my image, the only thing InDesign can do is notify you with the pink highlight and temporarily insert the font's own "missing" image (which may be a rectangle, a question mark, or even just a space).
    Fortunately, if you do need the character, all you have to do is change the font (for those characters only) to one that does contain the missing ones. You'll have to try all your system fonts before you find one that does and matches the rest of your text as well :-)
    There are a few special cases where InDesign gets it wrong, though. The Symbol font, for example, might not get imported correctly from Word, and in that case you have to manually change the character -- i.e., find the correct one in the Glyphs panel and insert it from there. There is also the Strange Case of the Invisible Marker: InDesign thinks there is something but if you check against the original Word file, there is nothing. This will usually insert a character code U+FFFD; you can safely delete them.

  • Simple Question About Using "Group by" Inside the Oracle XE Query Builder

    Hi,
I am a new user of Oracle 10g XE and I have built and populated some tables. I am trying to create a view (i.e. a query) using the Query Builder. I have chosen two attributes, say course_section_ID and trainer_ID, in the same table. I choose the "COUNT" function for course_section_ID and I check the "Group By" box for trainer_ID (I would like to count the number of course sections each trainer is teaching). Then I run the query, and the same error message always appears:
    fail to parse SQL query:
    ORA-00904: "COURSE_SECTION"."TRAINER_ID": invalid identifier
    Both attribute names should be valid (as shown above).
    If I only choose course_section_ID and do a COUNT on it, it gives the same error message on course_section_no.
    I did try to do the same thing with the demo HR database. There were no problems with counting a field nor with grouping on a field with HR.
    PLEASE HELP!
    Thanks.

I have got it. When all the attribute names are in uppercase, I can do aggregate functions and "group by" with the GUI.

  • Question about SNMPv3 group/user configuration

    I'm trying to do configure SNMPv3 on a 2811 router for the following:
    - group ADMIN should allow user access from 10.10.1.0/24
    - user IT is on group ADMIN, using MD5 authentication
    Is the following the right configuration? should I configure engine-ID? should I put "access 1" to the "snmp-server user" command?
    snmp-server group ADMIN v3 priv read READMIBS write WRITEMIBS access 1
    snmp-server user IT ADMIN v3 auth md5 password
    access-list 1 permit 10.10.1.0 0.0.0.255

"Now I did pull off a way to hide groups based on the step contextually without different profiles" - Would you mind sharing this with the community? It might be helpful for others, and it could also help to move your question forward.
    I am trying to imagine:
- one place where you can add idocScript to rules & profiles is the Rule Activation Condition. Was this your way?
- if so, you could have several rules under a single profile (my previous answer was a bit imprecise in that respect), and in the activation condition you can specify when you want the rule to apply; in the rule itself you can then specify which fields appear, and whether a field should be editable or read-only, etc. Note that you can have multiple rules defining the usage of the same metadata field - it is more transparent if the activation conditions are mutually exclusive.
I'm still struggling with your emphasis on showing/hiding groups. Could you describe it with a real-life example? Also, could you describe how you display metadata within a workflow process?
    My real example is as follows:
    - an invoice document is scanned by a receptionist. It could be an A/P invoice for paying raw material, or office supplies (I think they had three major categories). Depending on that, the receptionist sends it to the appropriate department (by filling an option-list metadata)
    - the head of department either accepts it, or sends it to the correct department. By accepting, he or she also fills in the responsible clerk to process the invoice (also an option-list)
    - the clerk is no longer allowed to modify Dept and RespPerson metadata, but fills in other metadata like Supplier, Invoice Amount, etc. - those fields are not visible to previous reviewers at all
    - etc.

  • Simple question about tabular forms: error in MRU

hi to all,
I am writing here to check whether what I think about tabular forms is correct.
If I build a master-detail or tabular form and the underlying table is modified, the page containing the tabular form doesn't work anymore.
If what I've written above is correct, is there a way to realign the fields of the tabular form with the DB fields and make my page work again without rebuilding everything?
Can I modify something, somewhere inside APEX, to make my page run again?
    The error i get (in italian / english)
    Error in mru internal routine: ORA-20001: Errore in MRU: riga= 0, ORA-20001: ORA-20001: La versione corrente dei dati nel database è cambiata da quando l'utente ha iniziato il processo di aggiornamento. checksum corrente = "ED73B05FA6016F8D5F3B4B5B69AF482D", checksum elementi = "CFD72DCC4221A340057D654B54EA7A04"., update "NEWPROJ"."CNT_VAL_SEG_DEMO" set "ID" = :b1, "FK_CNT" = :b2
It means: the version of the data in the DB (the fields of the tables?) has changed since the user started the update process.
By the way, in my opinion this sentence can trick you, as the Italian 'La versione corrente dei dati nel database' can mean both that the data contained in the tables has changed (DML error / process error / the data the user sees is not the real underlying data) and that the field definitions have changed. A more meaningful message in Italian would be 'la definizione delle tabelle usate dall'oggetto tabular form è cambiata', which translates to English as 'the DDL of the table(s) used by the tabular form has changed'.
    thanx a lot
    Message was edited by:
    Marcello Nocito

Hi Heinz,
yes, it is now clear. However, it is a pity that a little change in the query has so many implications: a change in the structure of the DB means a new feature / the data model is not aligned with the current requirements, so usually (but not always) it means changing the SELECT as well.
Since APEX is a RAD tool, my impression is that if you build a complex form with several linked reports/fields, and one day you have to change the query of the TABULAR FORM / MASTER DETAIL generated with the report, you are in for a lot of rebuilding.
I think I will avoid using these two objects in the future and use more htmldb.items in the SELECT.
    bye bye

  • Flash Professional Question: About the compiler error window

Sorry if this is the wrong forum, but I was unable to post a question to the Flash Professional forum for some weird reason. Basically my question is: when I get an error in my console in Flash Professional, is there an elegant way of jumping to my intended editor, Flash Builder? The built-in code viewer is useless and a hassle to use.

Launch the movie from Flash Builder:
Run > Test Movie will launch it in Flash Pro and give compile errors in Flash Builder.
Run > Debug As... will start a debug session (in browser or AIR) and you can debug it from Flash Builder (runtime errors, breakpoints, etc.).
    -Aaron
