TimesTen Error 5126: A system managed cache group cannot contain non-standard column type mapping

I have created a few cache groups in TimesTen.
When I ran the script to create the following cache group, it gave me an error.
Please see the details below.
CREATE USERMANAGED CACHE GROUP C_TBLSSTPACCOUNTINGSUMMARY
AUTOREFRESH MODE INCREMENTAL INTERVAL 5 SECONDS STATE ON
FROM
schema.tablename (
ACCOUNTINGSUMMARYID DOUBLE,
INPUTFILENAME VARCHAR(255) NOT NULL,
ACCOUNTINGSTATUS VARCHAR(255) NOT NULL,
RECORDCOUNT DOUBLE,
LASTUPDATEDDATE DATE NOT NULL,
CREATEDATE TIMESTAMP DEFAULT SYSDATE,
PRIMARY KEY (ACCOUNTINGSUMMARYID),
PROPAGATE);
The following error occurs:
5121: Non-standard type mapping for column JISPRATCORBILLINGDEV501.TBLSSTPACCOUNTINGSUMMARY.ACCOUNTINGSUMMARYID, cache operations are restricted
5126: A system managed cache group cannot contain non-standard column type mapping
The command failed.
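Errors 5121 and 5126 indicate that a column type declared in the cache group is not the standard TimesTen mapping for the corresponding Oracle column type. As a minimal sketch only (assuming the Oracle columns ACCOUNTINGSUMMARYID and RECORDCOUNT are defined as NUMBER in Oracle and that the data store supports Oracle data types; neither detail is shown above), the same cache group declared with the standard mapping would look like this:
CREATE USERMANAGED CACHE GROUP C_TBLSSTPACCOUNTINGSUMMARY
AUTOREFRESH MODE INCREMENTAL INTERVAL 5 SECONDS STATE ON
FROM
schema.tablename (
ACCOUNTINGSUMMARYID NUMBER,   -- assumption: the Oracle column is NUMBER, so NUMBER (not DOUBLE) is the standard mapping
INPUTFILENAME VARCHAR(255) NOT NULL,
ACCOUNTINGSTATUS VARCHAR(255) NOT NULL,
RECORDCOUNT NUMBER,           -- same assumption
LASTUPDATEDDATE DATE NOT NULL,
CREATEDATE TIMESTAMP DEFAULT SYSDATE,
PRIMARY KEY (ACCOUNTINGSUMMARYID),
PROPAGATE);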

If you have access to Oracle Metalink, please take a look at Note 367431.1.
Regards,
Sabdar Syed.

Similar Messages

  • User managed cache group

    Hi,
    I have created a user managed cache group as follows:
    create usermanaged cache group writewherecache
    AUTOREFRESH
    MODE INCREMENTAL
    INTERVAL 30 SECONDS
    STATE ON
    from interchange.writewhere
    (PK NUMBER NOT NULL primary key,
         ATTR VARCHAR2(40),PROPAGATE)
    where (interchange.writewhere.pk between '105' and '106');
    Oracle has 5 rows in the table, but in TimesTen 'select * from interchange.writewhere' does not return any rows.
    What is the problem?

    ttmesg.log showing as follows:
    17:34:00.91 Info: ORA: 3049: ora-3049-1077582144-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:01.83 Info: ORA: 3049: ora-3049-1107204416-refresh04075: Datastore: CACHEGENI Starting autorefresh number 2092 for interval 5000ms
    17:34:01.83 Info: ORA: 3049: ora-3049-1107204416-refresh04097: Datastore: CACHEGENI Autorefresh thread for interval 5000ms is connected to instance geni11g on host isgcent216. Server handle 46918156651896
    17:34:01.85 Info: ORA: 3049: ora-3049-1107204416-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:01.87 Info: ORA: 3049: ora-3049-1107204416-refresh04762: Datastore: CACHEGENI Cache agent refreshed cache group CACHEUSER.READCACHE: Number - 2092, Duration - 0ms, NumRows - 0, NumRootTblRows - 0, NumOracleBytes - 0, queryExecDuration - 0ms, queryFetchDuration - 0ms, ttApplyDuration - 0ms, totalNumRows - 0, totalNumRootTblRows - 0, totalNumOracleBytes - 0, totalDuration - 0ms
    17:34:01.87 Info: ORA: 3049: ora-3049-1107204416-refresh04824: Datastore: CACHEGENI Autorefresh number 2092 finished for interval 5000ms successfully
    17:34:01.87 Info: ORA: 3049: ora-3049-1107204416-fresher01709: Datastore: CACHEGENI Autorefresh number 2092 succeeded for interval 5000 milliseconds
    17:34:05.09 Info: ORA: 3049: ora-3049-1105090880-eporter00385: Datastore: CACHEGENI object_id 89922, bookmark 1
    17:34:05.09 Info: ORA: 3049: ora-3049-1105090880-eporter00385: Datastore: CACHEGENI object_id 89832, bookmark 6
    17:34:05.10 Info: ORA: 3049: ora-3049-1105090880-eporter00385: Datastore: CACHEGENI object_id 87616, bookmark 1
    17:34:05.93 Info: ORA: 3049: ora-3049-1077582144-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:06.83 Info: ORA: 3049: ora-3049-1107204416-refresh04075: Datastore: CACHEGENI Starting autorefresh number 2093 for interval 5000ms
    17:34:06.83 Info: ORA: 3049: ora-3049-1107204416-refresh04097: Datastore: CACHEGENI Autorefresh thread for interval 5000ms is connected to instance geni11g on host isgcent216. Server handle 46918156651896
    17:34:06.86 Info: ORA: 3049: ora-3049-1107204416-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:06.88 Info: ORA: 3049: ora-3049-1107204416-refresh04762: Datastore: CACHEGENI Cache agent refreshed cache group CACHEUSER.READCACHE: Number - 2093, Duration - 0ms, NumRows - 0, NumRootTblRows - 0, NumOracleBytes - 0, queryExecDuration - 0ms, queryFetchDuration - 0ms, ttApplyDuration - 0ms, totalNumRows - 0, totalNumRootTblRows - 0, totalNumOracleBytes - 0, totalDuration - 0ms
    17:34:06.88 Info: ORA: 3049: ora-3049-1107204416-refresh04824: Datastore: CACHEGENI Autorefresh number 2093 finished for interval 5000ms successfully
    17:34:06.88 Info: ORA: 3049: ora-3049-1107204416-fresher01709: Datastore: CACHEGENI Autorefresh number 2093 succeeded for interval 5000 milliseconds
    17:34:10.91 Info: ORA: 3049: ora-3049-1077582144-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:11.83 Info: ORA: 3049: ora-3049-1107204416-refresh04075: Datastore: CACHEGENI Starting autorefresh number 2094 for interval 5000ms
    17:34:11.83 Info: ORA: 3049: ora-3049-1107204416-refresh04097: Datastore: CACHEGENI Autorefresh thread for interval 5000ms is connected to instance geni11g on host isgcent216. Server handle 46918156651896
    tterrors.log did not show any error messages.
    shubha

  • Flushing user managed cache group - Applying filter

    Hi,
    I have a user managed cache group with two tables.
    Table A
    ItemId Number (Primary key)
    ItemName varchar2
    created_date date
    Table B
    ItemID Number (primary key)
    Sale_date Number (primary key)
    Sale_quanity number
    created_date date.
    primary key(itemid, sale_date) ,
    foreign key(itemid) references tablea(itemid)
    When I execute "flush cache group sample where ItemId=100 and created_date>=trunc(sysdate)", the filter is only applied to Table A for ItemId and created_date; it is not applied to Table B for either column.
    I traced the session in Oracle and found that all rows in Table B for ItemId=100 are flushed to Oracle, but the additional condition on created_date is not applied to Table B.
    I was expecting that TimesTen would apply the condition "created_date>=trunc(sysdate)" on both Table A and Table B, but it is being applied only on Table A.
    Is this expected behaviour? Is there any way I can force the flush operation to consider columns from both tables? Any other workaround?
    Many thanks,
    Regards,
    Raj


  • Group messaging not working with groups that contain non-iPhone users

    I have an iPhone 5 and for some reason I can't send/receive group messages in groups that contain non-iPhone users. Basically, I receive individual text messages/iMessages from each person who responds to the group message. This is incredibly annoying.

    Hello, megbu36. 
    Thank you for visiting Apple Support Communities.
    Check to make sure that group messaging is enabled.  Go to Settings > Messages and turn on Group messaging.
    iOS: Understanding group messaging
    http://support.apple.com/kb/HT5760
    Cheers,
    Jason H.

  • ERROR MESSAGE sap system manager:work process restarted, session terminated

    Hi,
    I am a beginner in SAP administration. Users are getting this error message, and despite all my research I have not been able to resolve the issue. Here are the details:
    SAP version: ideas 4.7
    Database: Oracle
    OS: Windows 2003
    Module the user is working on: MM
    The user working on it is a superuser with all permissions.
    SAP is configured to run with the European date and decimal format.
    I have never done any database administration on it; it is a new install and has rarely been used.
    The user creates an RFQ, and when he tries to save it, it seems that the first time after restarting the machine or the service it might work, and at times it might not. It is a very sporadic error; most of the time it crashes with the message "sap system manager:work process restarted, session terminated" and kicks the user out of the session.
    Below are the details of the error message from ST22:
    Name of the runtime error: SYSTEM_CORE_DUMPED
    Below are the details of the error message and its resolution as suggested by SAP help:
    ========
    Runtime Errors         SYSTEM_CORE_DUMPED           
           Occurred on     01.02.2008 at 07:52:19
    Process terminated by signal " ".                                             
    What happened?
    The current ABAP program had to be terminated because the                     
    ABAP processor detected an internal system error.                             
    The current ABAP program "SAPLCLSC" had to be terminated because the ABAP     
    processor discovered an invalid system state.                                 
    What can you do?
    Make a note of the actions and input which caused the error.
    To resolve the problem, contact your SAP system administrator.                                                                               
    You can use transaction ST22 (ABAP Dump Analysis) to view and administer      
    termination messages, especially those beyond their normal deletion           
    date.                                                                               
    Error analysis
    An SAP System process was terminated by an operating system signal.           
    Possible reasons for this are:
    1. Internal SAP System error.                                                 
    2. Process was terminated externally (by the system administrator).           
    Last error logged in SAP kernel
    Component............ "Taskhandler"
    Place................ "SAP-Server server1_DEV_00 on host server1 (wp 1)"      
    Version.............. 1                                                       
    Error code........... 11                                                      
    Error text........... "ThSigHandler: signal"                                  
    Description.......... " "                                                     
    System call.......... " "                                                     
    Module............... "thxxhead.c"                                            
    Line................. 9555                                                                               
    How to correct the error
    The SAP System work directory (e.g. /usr/sap/c11/D00/work ) often             
    contains a file called 'core'.                                                                               
    Save this file under another name.                                                                               
    If you cannot solve the problem yourself, please send the                     
    following documents to SAP:                                                                               
    1. A hard copy print describing the problem.                                  
       To obtain this, select the "Print" function on the current screen.         
    2. A suitable hardcopy printout of the system log.
       To obtain this, call the system log with Transaction SM21                  
       and select the "Print" function to print out the relevant                  
       part.                                                                               
    3. If the programs are your own programs or modified SAP programs,            
       supply the source code.                                                    
       To do this, you can either use the "PRINT" command in the editor or        
       print the programs using the report RSINCL00.                                                                               
    4. Details regarding the conditions under which the error occurred            
       or which actions and input led to the error.                                                                               
    System environment
    SAP Release.............. " "                                                                               
    Application server....... " "                                                 
    Network address.......... " "                                                 
    Operating system......... " "                                                 
    Release.................. " "                                                 
    Hardware type............ " "                                                 
    Character length......... " " Bits                                            
    Pointer length........... " " Bits                                            
    Work process number...... " "                                                 
    Short dump setting....... " "                                                                               
    Database server.......... " "                                                 
    Database type............ " "                                                 
    Database name............ " "                                                 
    Database owner........... " "                                                                               
    Character set............ " "                                                                               
    SAP kernel............... " "                                                 
    Created on............... " "                                                 
    Created in............... " "                                                 
    Database version......... " "                                                                               
    Patch level.............. " "                                                 
    Patch text............... " "                                                                               
    Supported environment....                                                     
    Database................. " "                                                 
    SAP database version..... " "                                                 
    Operating system......... " "                                                 
    User, transaction...
    Client.............. " "                                                      
    User................ " "                                                      
    Language key........ " "                                                      
    Transaction......... "ME41 "                                                  
    Program............. "SAPLCLSC"                                               
    Screen.............. "SAPMM06E 0320"                                          
    Screen line......... 71                                                       
    Information on where termination occurred
    The termination occurred in the ABAP program "SAPLCLSC" in "EXECUTE_SELECT".  
    The main program was "SAPMM06E ".                                                                               
    The termination occurred in line 131 of the source code of the (Include)      
    program "LCLSCF2G"                                                           
    of the source code of program "LCLSCF2G" (when calling the editor 1310).      
    =============
    I even tried increasing the number of dialog work processes, but the same error occurs.
    I appreciate any help I can get; I am working against a deadline of tomorrow evening to resolve this issue.
    thanks
    mudessir.

    Hi,
    Follow the correction method suggested in the dump:
    "The SAP System work directory (e.g. /usr/sap/c11/D00/work ) often
    contains a file called 'core'. Save this file under another name."
    Have you done this?
    With regards,
    raj.
    Please award points.

  • After BI 4.1 configuration in SOLMAN - error message "The systems you selected do not contain any software components currently supported by E2E Workload Analysis"

    Hi All,
    I'm trying to configure E2E monitoring. I have completed all the necessary steps; I can see my BI 4.1 system in SLD and in SOLMAN, and all "lights" are green. But when I start Root Cause Analysis >> End-to-End Analysis >> My system >> Workload Analysis, I see this error: "The systems you selected do not contain any software components currently supported by E2E Workload Analysis".

    It would be better to raise the question in the Remote Supportability and Monitoring Tools space, and/or comment on the blog post How to generate and consume an E2E trace with BI4.x (for non-SolMan landscapes) created by Toby Johnston.
    Also go through note 1871260 - The systems you selected do not contain any software components currently supported by E2E Workload analysis for SAP HANA.
    Hope they help.

  • System python interactive shell - cannot input non-ASCII characters

    If I open Terminal.app (with the default setup, in particular UTF-8 as the default encoding) I can input non-ASCII characters without any problem; in particular, I can type all Italian accented letters. But I can't input such characters in the system's python2.6 or python2.5 shells: all I get from Terminal is an error beep.
    I have a custom Python version (2.4.4) downloaded from the python.org website, and with that version of Python I don't have any problem.
    Can you help me with this?

    If anyone still has this problem, also with a downloaded Python install from python.org, the solution is to easy_install readline.
    Follow these steps:
    Check if you already have access to the easy_install command from a Terminal.app command line.
    If not, google "python how to install setuptools/distutils". It's pretty easy; you may just need to paste one or two terminal commands. This then gives you access to easy_install on the command line.
    In a Terminal.app command line, run the following command: easy_install readline
    This should make the Terminal Python use readline, which is a much-improved replacement for the age-old libedit library (the culprit behind this issue).

  • ORA-30931: Element 'document-groups' cannot contain mixed text

    I get this error when I try to create a schema-based XML resource as soon as the XML contains UTF-8 characters.
    DECLARE
      xmlref XMLType;
      result BOOLEAN;
    BEGIN
      SELECT XMLElement ( "document-groups",
               XMLAttributes ( 'http://www.w3.org/2001/XMLSchema-instance' AS "xmlns:xsi",
                               'CCRDocGroupsAndTypes.xsd' AS "xsi:noNamespaceSchemaLocation" ),
               XMLAgg (
                 XMLElement ( "document-group",
                   XMLAttributes ( DECODE(aa.aa_id, 5, 'FAQ', 7, 'SOFTWARE', 2, 'WARRANTY') AS "id" ),
                   XMLElement ( "last-modified-date", '2004-06-23T17:10:20' ),
                   XMLElement ( "document-types",
                     XMLAgg (
                       XMLElement ( "document-type",
                         XMLAttributes ( dtt.dt_cd AS "id" ),
                         XMLElement ( "description", dtt_desc ),
                         XMLElement ( "language",
                           XMLAttributes ( lang_cd AS "id" ) ) ) ) ) ) ) )
      INTO   xmlref
      FROM   m_doctype_translation dtt,
             m_document_type       dt,
             m_application_area    aa
      WHERE  dt.dt_cd = dtt.dt_cd
      AND    aa.aa_id = dt.aa_id
      --AND  dtt.lang_cd = 'ENG'
      AND    aa.aa_id IN (2, 5, 7)
      GROUP  BY aa.aa_id;
      result := DBMS_XDB.createResource( '/atg/xxx.xml', xmlref );
      COMMIT;
    END;
    Any ideas?

    Can you post the instance document generated by your SELECT statement?
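    One way to produce that instance document is to print what the SELECT generates before calling createResource. A minimal sketch, using a stand-in query (substitute the full XMLElement/XMLAgg SELECT from the block above) and assuming SERVEROUTPUT is enabled in the client:
    DECLARE
      xmlref XMLType;
    BEGIN
      -- stand-in for the real query above
      SELECT XMLElement ( "document-groups" ) INTO xmlref FROM dual;
      -- print the first 4000 characters of the generated document
      DBMS_OUTPUT.PUT_LINE( DBMS_LOB.SUBSTR( xmlref.getClobVal(), 4000, 1 ) );
    END;
    /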

  • Problem creating cache group for a table with data type varchar2(1800 CHAR)

    Hi,
    I am using TimesTen 7.0 with an Oracle 10.2.0.4 server. While creating a cache group for one of my tables I'm getting the following error:
    5121: Non-standard type mapping for column TICKET.DESCRIPTION, cache operations are restricted
    5168: Restricted cache groups are deprecated
    5126: A system managed cache group cannot contain non-standard column type mapping
    The command failed.
    One of the field types in the Oracle table is VARCHAR2(1800 CHAR). If I change the field size to <=1000 (e.g. VARCHAR2(1000 CHAR)), the CREATE CACHE GROUP command works fine.
    My database character set is UTF8.
    Is it possible to solve this without changing the field size in the Oracle table?
    Request your help on this.
    Thanks,
    Sunil

    Hi Chris.
    The TimesTen server and the Oracle client are installed on a 32-bit system.
    1. ttVersion
    TimesTen Release 7.0.5.0.0 (32 bit Linux/x86) (timesten122:17000) 2008-04-04T00:09:04Z
    Instance admin: root
    Instance home directory: /appl/TimesTen/timesten122
    Daemon home directory: /var/TimesTen/timesten122
    Access control enabled.
    2. Oracle DB details
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for Linux: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    Oracle Client - Oracle Client 10.2.0.4 running in a 32 bit Linux/x86
    3. ODBC Details
    Driver=/appl/TimesTen/timesten122/lib/libtten.so
    DataStore=/var/TimesTen/data
    PermSize=1700
    TempSize=244
    PassThrough=2
    UID=testuser
    OracleId=oraclenetservice
    OraclePwd=testpwd
    DatabaseCharacterSet=UTF8
    Thanks,
    Sunil

  • Suggestions required for Read-only cache group in timesten IMDB cache

    Hi
    In IMDB Cache, the underlying Oracle RAC database has two schemas ("KAEP" and "AAEP", with the same structure and the same object names), and I want to create a read-only cache group with an active/standby pair in TimesTen.
    Schema                                              
        KAEP  
    Table  
        Abc1
        Abc2
        Abc3                                    
    Schema
        AAEP
    Table
        Abc1
        Abc2
        Abc3
    Can a read-only cache group be created using a UNION ALL query?
    The result set should contain records from both schemas in the TimesTen read-only cache group. Will that be possible?
    Will there be any performance issue?

    You cannot create a cache group that uses UNION ALL. The only 'query' capability in a cache group definition is to use predicates in the WHERE clause and these must be simple filter predicates on the  tables in the cache group.
    Your best approach is to create separate cache groups for these tables in TimesTen and then define one or more VIEWS using UNION ALL in TimesTen in order to present the tables in the way that you want.
    Chris
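    For illustration, a minimal sketch of that approach. The column definitions here (ID as the primary key plus a VAL column) are assumptions, since the real ABC1 structure is not shown, and the cache group and view names are made up:
    CREATE READONLY CACHE GROUP CG_KAEP_ABC1
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
    FROM
    KAEP.ABC1 ( ID NUMBER NOT NULL, VAL VARCHAR2(50), PRIMARY KEY (ID) );
    CREATE READONLY CACHE GROUP CG_AAEP_ABC1
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
    FROM
    AAEP.ABC1 ( ID NUMBER NOT NULL, VAL VARCHAR2(50), PRIMARY KEY (ID) );
    -- one TimesTen view that presents both cached tables as a single result set
    CREATE VIEW COMBINED_ABC1 AS
    SELECT ID, VAL FROM KAEP.ABC1
    UNION ALL
    SELECT ID, VAL FROM AAEP.ABC1;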

  • OnCommand System Manager receives error 500

    Good day, I am running OnCommand System Manager ver 3.1.1 on Windows. Today, for the very first time, I have seen this issue: I can connect to my 3210 running DataONTAP 8.0.3P2 7-mode, but when I try to reach my new 2240 running DataONTAP 8.1.3P3 7-mode I receive an error 500 "connection refused". I have found this workaround: on the 2240s I issued the command >options httpd.admin.enable on ; after this, OnCommand System Manager probably still tries a secure connection, and on the console I see errors like [hostname: HTTPPool03:warning]: HTTP XML Authentication failed from MyClientIP. But now I guess OnCommand System Manager falls back to a non-secure connection: I see the question "do you want to set up a secure connection or continue without...", I answer "continue without" and I'm able to manage my filers again. What happened? Maybe something related to Java updates? Thanks in advance.
    Alessandro  

    While removing the newer version of Java and installing older versions probably fixes this in most cases, do you really want to run versions of software that have known vulnerabilities in them? I think that companies like NETAPP, EMC, DELL, HP, etc., need to be accountable for staying current. They need to upgrade their applications regularly to stay compatible with the platforms they develop on. The days of "write it once and forget it" are long gone. The threat vectors have changed and continue to change on a daily basis. If I had a machine that was dedicated to doing nothing other than managing storage, network and servers, that never saw any portion of the production network and was isolated 100% from the internet, perhaps leaving archaic versions of deprecated software out there would be an option. The days of doing business this way are also long gone. I cannot speak for everyone, of course, but I don't have the real estate on my desk and have no desire to run up and down the hall to my MDF every time I want to manage something in the environment.

  • How to load cache group?

    Dear ChrisJenkins,
    My project uses TimesTen. There is a table (in a read-only cache group) in TimesTen.
    ex :
    create table A (id number, content varchar(20));
    insert into A values (1, 'a1');
    insert into A values (2, 'a2');
    insert into A values (n, 'an');
    commit;
    The table A has been loaded with 10 rows ('a1' through 'a10'). If I execute the following SQL:
    "LOAD CACHE GROUP A WHERE id >= 2 AND id <= 11"
    how will TimesTen execute it?
    My guess:
    TimesTen does not load the rows with id = 2 through 10, because they are already in memory;
    it only loads the row with id = 11, because that row is not in memory yet.
    Is that correct?
    Thanks,rgds
    TuanTA

    In your example you are using a regular table, not a readonly cache group table. If you were using a readonly cache group then the table would be created like this:
    CREATE READONLY CACHE GROUP CG_A
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
    FROM
    ORACLEOWNER.A ( ID NUMBER, CONTENT VARCHAR(20));
    This assumes that the table ORACLEOWNER.A already exists in Oracle with the same schema. The table in TimesTen will start off empty. Also, you cannot insert, delete or update the rows in this table directly in TimesTen (that is why it is called a READONLY cache group); if you try, you will get an error. All data for this table has to originate in Oracle. Let's say that in Oracle you now do the following:
    insert into A values (1, 'a1');
    insert into A values (2, 'a2');
    insert into A values (10, 'a10');
    commit;
    Still the table in TimesTen is empty. We can load the table with the data from Oracle using:
    LOAD CACHE GROUP CG_A COMMIT EVERY 256 ROWS;
    Now the table in TimesTen has the same rows as the table in Oracle. Also, the LOAD operation changes the AUTOREFRESH state from PAUSED to ON. You still cannot directly insert, update or delete rows in this table in TimesTen, but any data changes arising from DML executed on the Oracle table will be captured and propagated to TimesTen by the AUTOREFRESH mechanism. If you now did, in Oracle:
    UPDATE A SET CONTENT = 'NEW' WHERE ID = 3;
    INSERT INTO A VALUES (11, 'a11');
    COMMIT;
    Then, after the next autorefresh cycle (every 10 seconds in this example), the table in TimesTen would contain:
    1, 'a1'
    2, 'a2'
    3, 'NEW'
    4, 'a4'
    5, 'a5'
    6, 'a6'
    7, 'a7'
    8, 'a8'
    9, 'a9'
    10, 'a10'
    11, 'a11'
    So, your question cannot apply for READONLY cache groups...
    If you used a USERMANAGED cache group then your question could apply (as long as the cache group was not using AUTOREFRESH and the table had not been marked READONLY). In that case a LOAD CACHE GROUP command will only load qualifying rows that do not already exist in the cache table in TimesTen. If a row with the same primary key already exists in TimesTen, the Oracle row is not loaded, even if its other columns have different values from those in TimesTen. Contrast this with REFRESH CACHE GROUP, which will replace all matching rows in TimesTen with the rows from Oracle.
    Chris
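    As a minimal sketch of that contrast, against a hypothetical USERMANAGED cache group UG_A (no AUTOREFRESH, table not marked READONLY) and using TuanTA's predicate:
    LOAD CACHE GROUP UG_A WHERE (ID >= 2 AND ID <= 11) COMMIT EVERY 256 ROWS;
    -- loads only qualifying rows whose primary keys are not already in TimesTen (here, the row with ID = 11)
    REFRESH CACHE GROUP UG_A COMMIT EVERY 256 ROWS;
    -- by contrast, replaces the matching cached rows with the current values from Oracle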

  • System.Management.Automation.MethodInvocationException: Exception calling "ExecuteQuery" with "0" argument(s): "$Resources:core,ImportErrorMessage;" --- Microsoft.SharePoint.Client. ServerException: $Resources:core,ImportErrorMessage;

    Hi,
    I am getting an error  System.Management.Automation.MethodInvocationException: Exception calling "ExecuteQuery" with "0" argument(s): "$Resources:core,ImportErrorMessage;" ---> Microsoft.SharePoint.Client. ServerException:
    $Resources:core,ImportErrorMessage;
    Following is my PowerShell script; the error is thrown on the line $context.ExecuteQuery().
    function AddWebPartToPage([string]$siteUrl, [string]$pageRelativeUrl, [string]$localWebpartPath, [string]$ZoneName, [int]$ZoneIndex)
    {
        try
        {
            # this reference is required here
            $clientContext = [Microsoft.SharePoint.Client.ClientContext,Microsoft.SharePoint.Client, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c]
            $context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
            write-host "Reading file " $pageRelativeUrl
            $oFile = $context.Web.GetFileByServerRelativeUrl($pageRelativeUrl);
            $limitedWebPartManager = $oFile.GetLimitedWebPartManager([Microsoft.SharePoint.Client.WebParts.PersonalizationScope]::Shared);
            write-host "getting xml reader from file"
            $xtr = New-Object System.Xml.XmlTextReader($localWebpartPath)
            [void] [Reflection.Assembly]::LoadWithPartialName("System.Text")
            $sb = new-object System.Text.StringBuilder
            while ($xtr.Read())
            {
                $tmpObj = $sb.AppendLine($xtr.ReadOuterXml());
            }
            $newXml = $sb.ToString()
            if ($xtr -ne $null)
            {
                $xtr.Close()
            }
            # Add Web Part to catalogs folder
            write-host "Adding Webpart....."
            $oWebPartDefinition = $limitedWebPartManager.ImportWebPart($newXml);
            $limitedWebPartManager.AddWebPart($oWebPartDefinition.WebPart, $ZoneName, $ZoneIndex);
            $context.ExecuteQuery();
            write-host "Adding Web Part Done"
        }
        catch
        {
            write-host "Error while 'AddWebPartToPage'" $_.exception | format-list * -force
        }
    }
    ERROR:
    Error while 'AddWebPartToPage' System.Management.Automation.MethodInvocationException: Exception calling "ExecuteQuery" with "0" argument(s): "$Resources:core,ImportErrorMessage;" ---> Microsoft.SharePoint.Client.
    ServerException: $Resources:core,ImportErrorMessage;
       at Microsoft.SharePoint.Client.ClientRequest.ProcessResponseStream(Stream responseStream)
       at Microsoft.SharePoint.Client.ClientRequest.ProcessResponse()
       at Microsoft.SharePoint.Client.ClientContext.ExecuteQuery()
       at ExecuteQuery(Object , Object[] )
       at System.Management.Automation.DotNetAdapter.AuxiliaryMethodInvoke(Object target, Object[] arguments, MethodInformation methodInformation, Object[] originalArguments)
       --- End of inner exception stack trace ---
       at System.Management.Automation.DotNetAdapter.AuxiliaryMethodInvoke(Object target, Object[] arguments, MethodInformation methodInformation, Object[] originalArguments)
       at System.Management.Automation.DotNetAdapter.MethodInvokeDotNet(String methodName, Object target, MethodInformation[] methodInformation, Object[] arguments)
       at System.Management.Automation.Adapter.BaseMethodInvoke(PSMethod method, Object[] arguments)
       at System.Management.Automation.ParserOps.CallMethod(Token token, Object target, String methodName, Object[] paramArray, Boolean callStatic, Object valueToSet)
       at System.Management.Automation.MethodCallNode.InvokeMethod(Object target, Object[] arguments, Object value)
       at System.Management.Automation.MethodCallNode.Execute(Array input, Pipe outputPipe, ExecutionContext context)
       at System.Management.Automation.ParseTreeNode.Execute(Array input, Pipe outputPipe, ArrayList& resultList, ExecutionContext context)
       at System.Management.Automation.StatementListNode.ExecuteStatement(ParseTreeNode statement, Array input, Pipe outputPipe, ArrayList& resultList, ExecutionContext context)
           

    Thanks, Sethu, for your comments. However, I am running this PowerShell directly on the server, so I believe SharePointOnlineCredentials is not required.
    I have tried it, but it still gives me the same error.

  • CPU usage high when loading cache group

    Hi,
    What are the possible reasons for high CPU usage when loading a read-only cache group with a big root table (~1 million records)? I have tried setting Logging=0 (without the cache agent), 1 and 2, but it doesn't help. Is there any other tuning configuration required to avoid high CPU consumption?
    ttVersion: TimesTen Release 6.0.2 (32 bit Solaris)
    Any help would be highly appreciated. Thanks in advance.

    High CPU usage is not necessarily a problem as long as the CPU is being used to do useful work. In that case high CPU usage shows that things are being processed taking maximum advantage of the available CPU power. The single most common mistake is not properly sizing the primary key hash index in TimesTen. Whenever you create a table with a PK in TimesTen (whether it is part of a cache group or just a standalone table) you must always specify the size of the PK hash index using the UNIQUE HASH ON (pk columns) PAGES = n clause (see the documentation). n should be set to the maximum number of rows expected in the table / 256. The default is sized for a table of just 4000 rows! If you try to load 1M rows into such a table we will waste a lot of CPU time serially scanning the (very long) hash chains in each bucket for every row inserted...
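    For illustration, a minimal sketch of where that clause goes in a cache group definition. The table, column names and types here are made up; the point is the PAGES value of roughly expected-rows / 256:
    CREATE READONLY CACHE GROUP CG_ITEMS
    AUTOREFRESH MODE INCREMENTAL INTERVAL 60 SECONDS STATE PAUSED
    FROM
    ORACLEOWNER.ITEMS
    ( ITEM_ID   NUMBER NOT NULL,
      ITEM_NAME VARCHAR2(100),
      PRIMARY KEY (ITEM_ID) )
    UNIQUE HASH ON (ITEM_ID) PAGES = 4000;   -- ~1,000,000 expected rows / 256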

  • Refreshing cache group from C/C++ application

    Hi
    Does anyone know how to refresh a user-managed cache group from an application developed in C or C++ (TTClasses)?
    Please help!
    lewismm

    Execute the relevant SQL statement ('REFRESH CACHE GROUP cgname' is the most likely one) just like you would any other SQL statement.
    Chris
