Issue with NQSConfig.ini

Hi,
When I disable the cache in EM, the change is not reflected in the NQSConfig.ini file. Can anyone please clarify this for me?
Thanks,

Now can you please correct me if I am wrong: after purging the server cache (and *not* purging the Presentation Server cache), will the iBot take the latest data from the database? Yes, it will take the latest data from the DB; you are right. If you purge the server cache, everything is cleaned up, and when the Presentation Server runs it has to talk to the BI Server to fetch the details, repopulating the Presentation Server cache as requested.
There is another method: purge the cache after every ETL run. You create an iBot to fire a custom script that purges the old cache entries and schedule it after the ETL run, so the stale data is cleaned up and when end users come in the next day the fresh data is there to see. A rough sketch of such a script is shown after the link below. It might help you.
http://vivekkulthe.blogspot.com/2008/08/purging-cache-using-ibots.html
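For reference, here is a rough sketch of such a purge script in Python. It simply wraps the nqcmd command-line utility and calls the BI Server's SAPurgeAllCache() procedure; the DSN name, credentials, and the assumption that nqcmd is on the PATH are placeholders for your environment, and the iBot (or any scheduler) can invoke the script after the ETL load finishes.

import os
import subprocess
import tempfile

# Assumed connection details -- replace with your BI Server DSN and credentials.
DSN = "AnalyticsWeb"
USER = "Administrator"
PASSWORD = "password"

def purge_bi_server_cache():
    # Write the cache purge call to a temporary script file for nqcmd.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("Call SAPurgeAllCache();\n")
        script_path = f.name
    try:
        # nqcmd is the Oracle BI Server command-line utility; it must be on the PATH
        # (or use the full path under OracleBI/server/Bin).
        subprocess.run(
            ["nqcmd", "-d", DSN, "-u", USER, "-p", PASSWORD, "-s", script_path],
            check=True,
        )
    finally:
        os.remove(script_path)

if __name__ == "__main__":
    purge_bi_server_cache()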
To improve the performance of reports you can also try this: http://blog.guident.com/2009/12/many-dashboards-on-most-systems-can-be.html
Cheers,
KK

Similar Messages

  • Potential Issue with "desktop.ini"

    Hey everybody, just found out something interesting... In the past I have written code that would look at a directory and use the first *.ini file it found there. However, I have discovered that Windows 7 can and does create a hidden desktop.ini file in nearly any directory it feels like. This has the potential to cause a whole lot of problems, most notably a File Error 8 if you try to modify the thing.
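    For what it's worth, here is a rough sketch (in Python rather than LabVIEW, purely for illustration) of the kind of guard that directory-scanning code needs: when picking the first *.ini file in a folder, skip desktop.ini so a Windows-generated file never gets selected. The path in the usage comment is hypothetical.

    import glob
    import os

    def first_ini_file(directory):
        """Return the first .ini file in `directory`, ignoring Windows' desktop.ini."""
        for path in sorted(glob.glob(os.path.join(directory, "*.ini"))):
            # Windows 7+ can drop a hidden desktop.ini into almost any folder;
            # treating it as an application config file leads to errors.
            if os.path.basename(path).lower() == "desktop.ini":
                continue
            return path
        return None

    # Example usage (hypothetical path):
    # print(first_ini_file(r"C:\MyApp\Config"))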
    In any case, heads-up ya'll (or to my friends in the Pittsburgh area "younz")...
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

    So, here's what I've learned so far.
    Fresh reboot.  Works fine.  Close and try again, doesn't work.  This you already know.
    Here's what I tried to see if it makes a difference:
    I removed:
    ~/Library/Caches
    ~/Library/Preferences/com.apple.finder.*, com.apple.systempreferences.* and com.apple.desktop.*
    That didn't help.
    I tried it in safe mode.  Same issue.
    Here's something interesting I noticed.  If you open terminal and killall Finder, that refreshes it and makes it work again... for 1 try.
    This does, in fact, seem like a bug.
    For reference, I'm currently on a mid-2012 MacBook Pro, running fully updated on Mountain Lion.

  • Need help resolving an issue with Adobe Output Server - errors Msg256 & Msg210 not in .ini file

    Hi,
    I am using Adobe Output Designer 5.5 for designing the label template and the Adobe Output Server for the printing process.
    In Jfmerge.ini we have set "DiscardUnknownFields=Yes" to ignore the unwanted fields in the .dat file.
    During the process, I am facing an issue with the Output Server when printing the labels.
    When the .dat file is placed in Adobe's Data folder, the label does not get printed on the printer.
    The file is moved to the error folder and an error file is generated which contains the error messages given below:
    090826 02:59:02 D:\Program Files\Adobe\Central\Bin\jfmerge: [256]** Message Msg256 not in .ini file **
    090826 02:59:02 D:\Program Files\Adobe\Central\Bin\jfmerge: [210]** Message Msg210 not in .ini file **
    2009/08/26 02:59:02 D:\Program Files\Adobe\Central\Bin\jfserver.exe: [314]Agent exit message: [210]** Message Msg210 not in .ini file **
    The Output Server is a new installation and I verified the Jfmerge.ini file. It contains the message details for Msg256 and Msg210.
    I also verified the license and it is valid.
    Kindly help me out in solving this issue.
    Thanks
    Senthil

    I assume this is too late to help you, but others might need a hint. I had the same problem and found some possible causes that I posted on http://codeznips.blogspot.com/2010/02/adobe-output-server-message-msg210-not.html.
    It is quite likely that you are missing some double quotes around the path specifying the ini file (-aii), if it's installed under "Program Files".
    Hope this helps anyone...
    Vegard

  • OBIEE10g: Even after changing NQSConfig.INI it still loads the old rpd.

    The new RPD name has been updated in the NQSConfig.INI file and the services have been restarted manually. Still, in Presentation Services, I get the old RPD.
    The log files of both NQServer.log and <NewRPD>.rpd.Log say that the new RPD has been loaded.
    Is there anything that I am missing? Is there any way to save the new RPD directly in Online mode?
    I am using OBIEE 10g as a standalone install on Windows XP.
    Thanks,
    Prem.

    The RPD doesn't come up in the list of available online RPDs.
    I did check the log file again (after changing the RPD name); it shows as loaded, and also the subject area under it.
    Still stuck with this issue.
    Regards,
    Prem.

  • Issue with parallel operation of SAP NW SSO 2.0 and SNC Client Encryption (Logon Groups)

    Hi!
    One of our customers is using the SNC Client Encryption solution to ensure encryption using SNC (based on Kerberos technology) for their SAP GUI dialog connections. They have lots of SAP backends (DEV, QAS, PRD), all with the SNC Client Encryption SNC library installed. The profile parameter snc/identity/as contains the following value: p:CN=SAP/<ServiceAccount>@<DOMAIN>.
    Example: p:CN=SAP/[email protected]
    The customer is using one AD service account "SNCServiceUser" with one registered SPN "SAP/SNCServiceUser" for all systems (yes, this is not recommended... but that is the case).
    Important: All users use group entries in the SAP Logon (saplogin.ini). This means that for SAP logon the SNC name cannot be manually configured on the SAP front end. With group logons, the application server's SNC name is dynamically requested from the message server each time a SAP GUI connection is started. The SNC name is greyed out in this case, as it is dynamically obtained from the application server's profile parameter snc/identity/as.
    Now our customer is implementing SAP NetWeaver Single Sign-On 2.0 within his landscape. Based on the Secure Login Server 2.0 (SP3), he would like to use X.509-based authentication to his AS ABAP backends via SAP GUI SNC, while others still use SNC Client Encryption.
    Replacing the SNC Library on the AS ABAP
    The Secure Login Library 2.0 (SP3) has been installed on one of the ABAP systems and the SNC Client Encryption SNC library (which is based on SSO 1.0) is no longer used, so we changed the parameter snc/gssapi_lib to point to the new SNC library. We removed the old PSE.ZIP containing the keytab and created the new SAPSNCSKERB.PSE incl. the keytab and proper credentials. To ensure parallel operation, we kept the snc/identity/as value as is: p:CN=SAP/[email protected].
    After restarting the system with the initialized Secure Login Library 2.0, SNC Client Encryption still works fine for existing users.
    The problem
    We created on the Secure Login Server an SNC certificate for the AS ABAP which has the following X.509 Distinguished Name format: CN=SAP/[email protected]. This is to avoid having to change snc/identity/as to a "real" X.509 DN, which would lead to non-working SNC Client Encryption for all the other users using SAP GUI and logon groups.
    As soon as we install the PSE via STRUST on the system, the SNC Client Encryption solution stops working with the error "Server refuses Kerberos key exchange".
    As part of a pilot implementation we have installed Secure Login Client 2.0 (SP3) on some test PCs. A test PC with SLC is able to perform single sign-on with SNC based on X.509 (incl. encryption) to the ABAP system.
    It seems the SAP system now only tries to do X.509-based authentication, so the Kerberos key exchange fails. The problem is, we cannot change the snc/identity/as value because of the logon groups. If we were able to do so, we would in any case set the server identity to an X.509 DN and in addition create the SAPSNCSKERB.PSE incl. keytab. This should work, as confirmed by SAP; see this post.
    Any ideas how to solve this and have both solutions in parallel?
    Appreciate any help.
    Regards,
    Carsten

    Hi all,
    we were able to fix the issue. It was a problem with the customer's cluster configuration and the $SECUDIR variable. This tricky issue led to non-working or only sporadically working SNC Client Encryption...
    This was how the configuration looks before:
    Environment variable $SECUDIR is defined:
    "/ABCDEF<SID>/usr/sap/<SID>/DVEBMGSxx/sec“
    sapgenpse seclogin -l -v
    running seclogin with USER="<SID>adm"
    Credentials for username '<SID>adm':
    0 (LPS:OFF):
             (LPS:OFF): /ABCDEF<SID>/usr/sap/<SID>/DVEBMGSxx/sec/SAPSNCSKERB.pse
    1 (LPS:OFF):
             (LPS:OFF): /usr/sap/<SID>/DVEBMGSxx/sec/SAPSNCS.pse
    After changing $SECUDIR to "/usr/sap/<SID>/DVEBMGSxx/sec" and re-creating the credentials, it worked like a charm.
    As a result of this we can confirm that this configuration and SNC Client Encryption work with CommonCryptoLib in parallel to the SSO configuration.
    And Valerie was right with point 2: SLC, starting from version 1.0 SP2 PL3, is able to convert the CN= part of the SNC name into an SPN; that was my mistake. In addition, SNC Client Encryption starting from version 1 SP1 PL1 does this as well, just to make that clear.
    Thread closed, hope this helps someone
    Carsten

  • Performance issue with Oracle data source

    Hi all,
    I've got a rather strange problem that I'm stuck on and need some assistance with.
    I have a rules file which drags data in via an SQL data source that's an Oracle server. If I cut/paste the 3 sections of "select", "from" and "where" into SQL Developer and run the query, it takes less than 1 second to complete. When I run the "load data" with this rules file, or even use "Retrieve" within the rules file editor, it takes up to an hour to complete/retrieve the data.
    The table in question has millions of rows and I'm using one of the indexed fields to retrieve the data. It's as if Essbase/the rules file is ignoring the index, or I have a config issue with the ODBC settings on the server that is causing the problem.
    The ODBC.INI file entry for the Oracle server is as follows (sensitive info changed to xxx or 999):
    [XXX]
    Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
    Description=DataDirect 5.2 Oracle Wire Protocol
    AlternateServers=
    ApplicationUsingThreads=1
    ArraySize=60000
    CachedCursorLimit=32
    CachedDescLimit=0
    CatalogIncludesSynonyms=1
    CatalogOptions=0
    ConnectionRetryCount=0
    ConnectionRetryDelay=3
    DefaultLongDataBuffLen=1024
    DescribeAtPrepare=0
    EnableDescribeParam=0
    EnableNcharSupport=0
    EnableScrollableCursors=1
    EnableStaticCursorsForLongData=0
    EnableTimestampWithTimeZone=0
    HostName=999.999.999.999
    LoadBalancing=0
    LocalTimeZoneOffset=
    LockTimeOut=-1
    LogonID=xxx
    Password=xxx
    PortNumber=1521
    ProcedureRetResults=0
    ReportCodePageConversionErrors=0
    ServiceType=0
    ServiceName=xxx
    SID=
    TimeEscapeMapping=0
    UseCurrentSchema=1
    Can anyone please advise on this lack of performance?
    Thanks in advance
    Bagpuss

    One other thing that I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when it retrieves data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either by record or groups of records) that is slowed WAY down over the WAN.
    Our solution to this was to remove the query from the load rule, run it via SQL*Plus on a command line at the geographic location where the Oracle database is, then ftp the resulting file to where the Essbase server is.
    With upwards of 6 million records being retrieved, it took around 4 hours in the load rule, but running the query via command line took 10 minutes, then the ftp took less than 5.
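    For what it's worth, here is a rough sketch of that extract step done in Python with cx_Oracle instead of SQL*Plus (my assumption, not what the poster used; the idea is the same: run the query close to the database and ship a flat file to the Essbase box). The connection details, query, and output path below are placeholders.

    import cx_Oracle  # assumes the Oracle client libraries and cx_Oracle are installed

    # Placeholders -- replace with your own connection details and the load rule's query.
    DSN = "dbhost.example.com/ORCL"
    QUERY = "SELECT col1, col2, amount FROM fact_table WHERE period = 'Jan'"

    def extract_to_flat_file(path):
        conn = cx_Oracle.connect("user", "password", DSN)
        try:
            cur = conn.cursor()
            cur.execute(QUERY)
            with open(path, "w") as out:
                while True:
                    rows = cur.fetchmany(50000)  # fetch in batches to limit round trips
                    if not rows:
                        break
                    for row in rows:
                        # Tab-delimited output that a simple Essbase load rule can read.
                        out.write("\t".join(str(v) for v in row) + "\n")
        finally:
            conn.close()

    if __name__ == "__main__":
        extract_to_flat_file("/tmp/essbase_load.txt")
        # Then ftp/scp the file to the Essbase server and load it with the existing rule file.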

  • Syntax error in NQSConfig.INI file

    # NQSConfig.INI
    # Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
    # INI file parser rules are:
    # If values are in literals, digits or _, they can be
    # given as such. If values contain characters other than
    # literals, digits or _, values must be given in quotes.
    # Repository Section
    # Repositories are defined as logical repository name - file name
    # pairs. ODBC drivers use logical repository name defined in this
    # section.
    # All repositories must reside in OracleBI\server\Repository
    # directory, where OracleBI is the directory in which the Oracle BI
    # Server software is installed.
    [ REPOSITORY ]
    Star     =     OracleBIAnalyticsApps.rpd, DEFAULT
    # Query Result Cache Section
    [ CACHE ]
    ENABLE     =     NO;
    // A comma separated list of <directory maxSize> pair(s)
    // e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
    DATA_STORAGE_PATHS     =     "C:\OracleBIData\cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;
    // Cluster-aware cache
    // GLOBAL_CACHE_STORAGE_PATH = "<directory name>" SIZE;
    // MAX_GLOBAL_CACHE_ENTRIES = 1000;
    // CACHE_POLL_SECONDS = 300;
    // CLUSTER_AWARE_CACHE_LOGGING = NO;
    # General Section
    # Contains general server default parameters, including localization
    # and internationalization, temporary space and memory allocation,
    # and other default parameters used to determine how data is returned
    # from the server to a client.
    [ GENERAL ]
    // Localization/Internationalization parameters.
    LOCALE     =     "English-usa";
    SORT_ORDER_LOCALE     =     "English-usa";
    SORT_TYPE = "binary";
    // Case sensitivity should be set to match the remote
    // target database.
    CASE_SENSITIVE_CHARACTER_COMPARISON = OFF ;
    // SQLServer65 sorts nulls first, whereas Oracle sorts
    // nulls last. This ini file property should conform to
    // that of the remote target database, if there is a
    // single remote database. Otherwise, choose the order
    // that matches the predominant database (i.e. on the
    // basis of data volume, frequency of access, sort
    // performance, network bandwidth).
    NULL_VALUES_SORT_FIRST = OFF;
    DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss" ;
    DATE_DISPLAY_FORMAT = "yyyy/mm/dd" ;
    TIME_DISPLAY_FORMAT = "hh:mi:ss" ;
    // Temporary space, memory, and resource allocation
    // parameters.
    // You may use KB, MB for memory size.
    WORK_DIRECTORY_PATHS     =     "C:\OracleBIData\tmp";
    SORT_MEMORY_SIZE = 4 MB ;
    SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
    VIRTUAL_TABLE_PAGE_SIZE = 128 KB ;
    // Analytics Server will return all month and day names as three
    // letter abbreviations (e.g., "Jan", "Feb", "Sat", "Sun").
    // To use complete names, set the following values to YES.
    USE_LONG_MONTH_NAMES = NO;
    USE_LONG_DAY_NAMES = NO;
    UPPERCASE_USERNAME_FOR_INITBLOCK = NO ; // default is no
    // Aggregate Persistence defaults
    // The prefix must be between 1 and 8 characters long
    // and should not have any special characters ('_' is allowed).
    AGGREGATE_PREFIX = "SA_" ;
    # Security Section
    # Legal value for DEFAULT_PRIVILEGES are:
    # NONE READ
    [ SECURITY ]
    DEFAULT_PRIVILEGES = READ;
    PROJECT_INACCESSIBLE_COLUMN_AS_NULL     =     NO;
    MINIMUM_PASSWORD_LENGTH     =     0;
    #IGNORE_LDAP_PWD_EXPIRY_WARNING = NO; // default is no.
    #SSL=NO;
    #SSL_CERTIFICATE_FILE="servercert.pem";
    #SSL_PRIVATE_KEY_FILE="serverkey.pem";
    #SSL_PK_PASSPHRASE_FILE="serverpwd.txt";
    #SSL_PK_PASSPHRASE_PROGRAM="sitepwd.exe";
    #SSL_VERIFY_PEER=NO;
    #SSL_CA_CERTIFICATE_DIR="CACertDIR";
    #SSL_CA_CERTIFICATE_FILE="CACertFile";
    #SSL_TRUSTED_PEER_DNS="";
    #SSL_CERT_VERIFICATION_DEPTH=9;
    #SSL_CIPHER_LIST="";
    # There are 3 types of authentication. The default is NQS
    # You can select only one of them
    #----- 1 -----
    #AUTHENTICATION_TYPE = NQS; // optional and default
    #----- 2 -----
    #AUTHENTICATION_TYPE = DATABASE;
    # [ DATABASE ]
    # DATABASE = "some_data_base";
    #----- 3 -----
    #AUTHENTICATION_TYPE = BYPASS_NQS;
    # Server Section
    [ SERVER ]
    SERVER_NAME = Oracle_BI_Server ;
    READ_ONLY_MODE = NO;     // default is "NO". That is, repositories can be edited online.
    MAX_SESSION_LIMIT = 2000 ;
    MAX_REQUEST_PER_SESSION_LIMIT = 500 ;
    SERVER_THREAD_RANGE = 40-100;
    SERVER_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    DB_GATEWAY_THREAD_RANGE = 40-200;
    DB_GATEWAY_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    MAX_EXPANDED_SUBQUERY_PREDICATES = 8192; // default is 8192
    MAX_QUERY_PLAN_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_INFO_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_QUERY_CACHE_ENTRIES = 1024; // default is 1024
    INIT_BLOCK_CACHE_ENTRIES = 20; // default is 20
    CLIENT_MGMT_THREADS_MAX = 5; // default is 5
    # The port number specified with RPC_SERVICE_OR_PORT will NOT be considered if
    # a port number is specified in SERVER_HOSTNAME_OR_IP_ADDRESSES.
    RPC_SERVICE_OR_PORT = 9703; // default is 9703
    # If port is not specified with a host name or IP in the following option, the port
    # number specified at RPC_SERVICE_OR_PORT will be considered.
    # When port number is specified, it will override the one specified with
    # RPC_SERVICE_OR_PORT.
    SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS"; # Example: "hostname" or "hostname":port
    # or "IP1","IP2":port or
    # "hostname":port,"IP":port2.
    # Note: When this option is active,
    # CLUSTER_PARTICIPANT should be set to NO.
    ENABLE_DB_HINTS = YES; // default is yes
    PREVENT_DIVIDE_BY_ZERO = YES;
    CLUSTER_PARTICIPANT = NO; # If this is set to "YES", comment out
    # SERVER_HOSTNAME_OR_IP_ADDRESSES. No specific NIC support
    # for the cluster participant yet.
    // Following required if CLUSTER_PARTICIPANT = YES
    #REPOSITORY_PUBLISHING_DIRECTORY = "<dirname>";
    #REQUIRE_PUBLISHING_DIRECTORY = YES; // Don't join cluster if directory not accessible
    DISCONNECTED = NO;
    AUTOMATIC_RESTART = YES;
    # Dynamic Library Section
    # The dynamic libraries specified in this section
    # are categorized by the CLI they support.
    [ DB_DYNAMIC_LIBRARY ]
    ODBC200 = nqsdbgatewayodbc;
    ODBC350 = nqsdbgatewayodbc35;
    OCI7 = nqsdbgatewayoci7;
    OCI8 = nqsdbgatewayoci8;
    OCI8i = nqsdbgatewayoci8i;
    OCI10g = nqsdbgatewayoci10g;
    DB2CLI = nqsdbgatewaydb2cli;
    DB2CLI35 = nqsdbgatewaydb2cli35;
    NQSXML = nqsdbgatewayxml;
    XMLA = nqsdbgatewayxmla;
    ESSBASE = nqsdbgatewayessbasecapi;
    # User Log Section
    # The user log NQQuery.log is kept in the server\log directory. It logs
    # activity about queries when enabled for a user. Entries can be
    # viewed using a text editor or the nQLogViewer executable.
    [ USER_LOG ]
    USER_LOG_FILE_SIZE = 10 MB; // default size
    CODE_PAGE = "UTF8"; // ANSI, UTF8, 1252, etc.
    # Usage Tracking Section
    # Collect usage statistics on each logical query submitted to the
    # server.
    [ USAGE_TRACKING ]
    ENABLE = YES;
    //==============================================================================
    // Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
    STORAGE_DIRECTORY = "<full directory path>";
    CHECKPOINT_INTERVAL_MINUTES = 5;
    FILE_ROLLOVER_INTERVAL_MINUTES = 30;
    CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
    //==============================================================================
    DIRECT_INSERT = YES;
    //==============================================================================
    // Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
    PHYSICAL_TABLE_NAME = "OBI Usage Tracking"."Catalog"."dbo"."S_NQ_ACCT" ; // Or "<Database>"."<Schema>"."<Table>" ;
    CONNECTION_POOL = "OBI Usage Tracking"."Usage Tracking Writer Connection Pool>" ;
    BUFFER_SIZE = 10 MB ;
    BUFFER_TIME_LIMIT_SECONDS = 5 ;
    NUM_INSERT_THREADS = 5 ;
    MAX_INSERTS_PER_TRANSACTION = 1 ;
    //==============================================================================
    # Query Optimization Flags
    [ OPTIMIZATION_FLAGS ]
    STRONG_DATETIME_TYPE_CHECKING = ON ;
    # CubeViews Section
    [ CUBE_VIEWS ]
    DISTINCT_COUNT_SUPPORTED = NO ;
    STATISTICAL_FUNCTIONS_SUPPORTED = NO ;
    USE_SCHEMA_NAME = YES ;
    USE_SCHEMA_NAME_FROM_RPD = YES ;
    DEFAULT_SCHEMA_NAME = "ORACLE";
    CUBE_VIEWS_SCHEMA_NAME = "ORACLE";
    LOG_FAILURES = YES ;
    LOG_SUCCESS = NO ;
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\CubeViews.Log";
    # MDX Member Name Cache Section
    # Cache subsystem for mapping between unique name and caption of
    # members for all SAP/BW cubes in the repository.
    [ MDX_MEMBER_CACHE ]
    // The entry to indicate if the feature is enabled or not, by default it is NO since this only applies to SAP/BW cubes
    ENABLE = NO ;
    // The path to the location where cache will be persisted, only applied to a single location,
    // the number at the end indicates the capacity of the storage. When the feature is enabled,
    // administrator needs to replace the "<full directory path>" with a valid path,
    // e.g. DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB ;
    DATA_STORAGE_PATH     =     "C:\OracleBIData\cache" 500 MB;
    // Maximum disk space allowed for each user;
    MAX_SIZE_PER_USER = 100 MB ;
    // Maximum number of members in a level will be able to be persisted to disk
    MAX_MEMBER_PER_LEVEL = 1000 ;
    // Maximum size for each individual cache entry size
    MAX_CACHE_SIZE = 100 MB ;
    # Oracle Dimension Export Section
    [ ORA_DIM_EXPORT ]
    USE_SCHEMA_NAME_FROM_RPD = YES ; # NO
    DEFAULT_SCHEMA_NAME = "ORACLE";
    ORA_DIM_SCHEMA_NAME = "ORACLE";
    LOGGING = ON ; # OFF, DEBUG
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\OraDimExp.Log";
    Help me out gurus, thanks in advance.

    Hi,
    Star = OracleBIAnalyticsApps.rpd, DEFAULT: this line should end with a semicolon, i.e.
    Star = OracleBIAnalyticsApps.rpd, DEFAULT;
    Assign Points and close thread, if your question is answered...
    Cheers,
    Aravind

  • Issue with package being distributed to DP

    Hi,
    We're having an issue with a particular DP when trying to re-distribute a driver package to it. The driver package was updated and re-distributed to around 80 DPs. The package was sent successfully to all except one.
    I have tried re-distributing this package to the DP again and it fails, while other packages distribute fine, so the issue lies with this particular package on this particular DP.
    I have run through the steps within this post but still no joy:
    https://social.technet.microsoft.com/Forums/en-US/8328f9ce-d290-4aba-8187-670080464476/sccm-2012-how-to-delete-packages-from-wmi
    I have made sure the package doesn't exist on the DP by taking it out of root\sccmdp within WMI and also removing the .ini file from the PkgLib folder, as per the above post. But on trying to distribute to the DP via the admin console I get "Error: The SMS Provider reported an error."
    There are no other details for the error, and looking through various logs gives no further indication of the cause.
    Interestingly, the package is being reported by the admin console as not being on the DP, so re-distribution is not an option; I can only use the distribute content option, even though it had been on this DP. This error is reported at the end of the distribute content wizard.
    Also, we tried using Content Explorer from the Config Manager toolkit before running through the above post, and the package was shown within Content Explorer but it was greyed out. The options to validate were also greyed out. After running the above removal, the package was no longer displayed within Content Explorer. But running through distribute content via the admin console still shows the same error message.
    We then tried pre-staging the content manually on the DP. On using content explorer after this it still shows the package as greyed out.
    An overnight validation doesn’t report any errors for package content either.
    Another curious note: on running smsdpmon manually on the DP for the package, it reports the package as successfully verified within smsdpmon.log. This message is also reported back to the MP and can be seen within the Status Message Viewer, but it is not displayed within Monitoring -> Distribution Point Configuration Status.
    We have tried quite a few avenues but still no joy with this package. Could anyone offer any suggestions to help fix this issue?
    Thanks
    James

    It was originally, but the driver package needed to be updated. Once updated, the package was re-distributed to all sites but failed on one DP out of 80. All DPs are more or less configured the same, with the same AV protection.
    The problem DP takes other packages OK, so the thought was to try to remove all trace of the package from the problem DP. That is where we are now. Hope that is a bit clearer?

  • Issues with RPD Version Number - Uploading OLD RPD

    Hi Gurus,
    I am facing an issue in OBIEE 11g over the last few days.
    Whenever I upload an RPD, EM shows that it has incremented the version number of my RPD and that it has been deployed. But after checking Answers and opening the RPD in online mode, we still find that an outdated (old) RPD is being used.
    Unless and until we rename the RPD and upload it, the changes are not reflected. Can you please help resolve this issue?
    Thanks

    Hi, I had this issue and was not able to resolve it. I had to do a re-install of OBIEE 11.1.1.7. I'm curious to know if anyone has had the same problem and was able to fix it another way, so I'm going to list my observations below:
    EM did increment my repository.RPD to repository_002.rpd but repository_001.rpd was still active.
    Mbeans showed that the "current" RPD was 002, but NQSConfig.ini showed that "Star = repository_001"
    When clicking "Apply" under EM Deployment screen and restarting, the version number incremented, but NQSConfig.ini did not update
    "Restart All" in EM appeared to be unusually fast.
    When restarting OBIEE OPMN components in EM, the components always show ALIVE under opmnctl in the console window
    oracle.as.managment.mbeans.opmn mbean was not available in the system mbean browser, which I believe means that the OPMN instance was somehow unregistered
    I re-registered the instance, which completely wiped the entire instance1 directory clean (I do not recommend doing this unless you back everything up outside the instance)
    Re-registering the instance worked temporarily, but then broke again later that week

  • NQSConfig.ini

    Hi all,
    I was doing a repository and catalog migration from Dev to Prod. My version is 11g. I did this from the WebLogic Enterprise Manager using the Deployment option. Now I notice that users cannot see My Folders in Analysis. Moreover, I noticed the NQSConfig.ini file did not replace the repository name with the latest one, because I changed the repository name in Prod before uploading. Please assist. What is going wrong?
    Any help is appreciated.
    Thanks.

    Hi Lakshmipathi, thanks for your prompt response. On Prod, I checked the NQSConfig.ini file to see which repository it is pointing to. I copied the same repository across environments, renamed it on Dev, and then uploaded it along with the catalog. I restarted the server as well. But NQSConfig.ini still shows the old repository name. Please let me know if you need any other info...
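    As an aside, a quick way to confirm which repository the BI Server's NQSConfig.INI actually points to is to read its [Repository] section directly. A rough sketch in Python (the path below is an assumption; point it at the NQSConfig.INI your running server instance uses):

    import re

    # Assumed path -- adjust to your install (for 11g, the file lives under
    # .../config/OracleBIServerComponent/coreapplication_obis1/NQSConfig.INI).
    NQSCONFIG = r"C:\OracleBI\server\Config\NQSConfig.INI"

    def configured_repositories(path=NQSCONFIG):
        """Return the logical name -> rpd file entries from the [Repository] section."""
        repos = {}
        in_section = False
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.upper().replace(" ", "").startswith("[REPOSITORY"):
                    in_section = True
                    continue
                if in_section and line.startswith("["):
                    break  # reached the next section
                if in_section:
                    m = re.match(r"(\w+)\s*=\s*([^,;]+)", line)
                    if m:
                        repos[m.group(1)] = m.group(2).strip()
        return repos

    print(configured_repositories())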

  • Rtorrent: issue with DL of large files (>4GB) to NTFS

    Using the latest rtorrent/rutorrent: every time I DL a large (>4GB) file with rtorrent to the NTFS drive, it shows the whole file downloading MB by MB, but when I go to hash check (via rutorrent), only a partial percentage has been downloaded. Say I DL a 4.36 GB .mkv file; I hash check and only 10% is done, ~400MB or about 6 minutes of the video.
    Oddly:
    If I do ls -l --block-size=MB, the file shows normal 4GB+ size.
    If I do ls -s, file appears to be only a few hundred MB.
    If I DL to my root ext4 drive, there's no issue unless I change the save path of the torrent in rutorrent and elect for the files to be moved to the NTFS drive.
    I've transferred large files with 'cp' from another NTFS to this NTFS with no issue.
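    (Side note on the two ls observations above: an apparent size of 4GB+ with only a few hundred MB actually allocated is exactly what a sparse, partially written file looks like, which would match the ~10% hash-check result. A rough Python sketch of that check, with a hypothetical path:)

    import os

    def apparent_vs_allocated(path):
        """Compare a file's apparent size with the space actually allocated on disk."""
        st = os.stat(path)
        apparent = st.st_size              # what ls -l reports
        allocated = st.st_blocks * 512     # st_blocks is counted in 512-byte units on Linux
        print(f"apparent:  {apparent / 1024**3:.2f} GiB")
        print(f"allocated: {allocated / 1024**3:.2f} GiB")
        if allocated < apparent:
            print("file is sparse / only partially written")

    # Hypothetical path on the NTFS mount:
    # apparent_vs_allocated("/mnt/ntfs/downloads/movie.mkv")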
    I thought the problem was rutorrent plugin autotools, but I removed it from my plugins folder and the problem persists.
    Permissions:
    I have all the relevant directories in /etc/php.ini open_basedir:  the user/session, the mounted drive, and /srv/http/rutorrent
    I did #chown -R http:http /srv/http/rutorrent
    http is a member of the group with NTFS drive access
    the rutorrent/tmp directory is changed to be within /srv/http/rutorrent
    This is a pesky issue that I didn't have with my last arch install using the same general set up.
    I DL to an NTFS formatted drive and mount it the same way I did before: ntfs-3g defaults,auto,uid=XXXX,gid=XXXX,dmask=027,fmask=037
    My rtorrent user is the uid (owner) and is in the group that has access to the drive (along with my audio server user and http)
    I run rtorrent in screen as the rtorrent user
    I imagine this is an issue with rutorrent?
    Any tips before I reformat the whole 4TB to ext4?
    EDIT: the issue is definitely isolated to rtorrent. I manually added a large torrent using rtorrent and it completed. I then hash checked (in rtorrent) and again only ~10% was shown as complete.
    EDIT2: It is most definitely not a permissions issue. I tried this again without the mount permissions options and the same thing happens.
    Last edited by beerhoof (2015-01-30 22:05:57)

    I'm afraid I don't understand the question.
    7.2 now correctly parses the Canon XF .CIF sidecar files to determine whether the media is supposed to be spanned or not. This has been a feature request that has finally been addressed to work correctly.
    (It also worked in 7.1 & previous, but had limitations: the performance wasn't as good, there had been issues in the past with audio pops at cut points, and it required that the Canon XF folder structure remain intact, i.e. if you copied the media to a flattened folder structure, it would fail to do the spanning correctly.)
    If you are looking for a means to disable the automatic spanning, simply removing the .CIF files will achieve that, although I'm not sure I understand why you're looking to do that. Most people *want* spanning to happen automatically, otherwise you're forced to manually sync spanned media segments by hand.

  • Clustering issues with 4.5.1

    We are using WebLogic 4.5.1 with SP7 and have heard that there are issues with WebLogic clustering. Specifically, the IIS plugin has memory leaks, and 4.5.1 with SP7 has issues when implementing clustering (the app servers in the cluster sometimes do not respond and sometimes contend for who owns the next request coming from the proxy). I've also seen that when an HTTP 1.1 request comes to the Web server, the proxy sends an HTTP 1.0 request to the App server. I've also heard that SP10, due out in July, will fix these issues. Can anyone verify this?
    Also, I want to make sure that our proposed configuration can work in a clustered environment. I have two IIS Web servers, WebA and WebB, and also two Application servers running 4.5.1 SP7, AppA and AppB. Both Web servers are identical and both App servers are identical. Our WebLogic ListenPort is 7005.
    If I set up WebA's iisproxy.ini file to contain the line
    "WebLogicCluster=AppA:7005,AppB:7005"
    I then copy WebA's iisproxy.ini file to WebB so that the two iisproxy.ini files are identical.
    The App servers have an identical configuration and point to 237.0.0.1 as their multicast IP address.
    Is this configuration OK? Can two web servers point to the same two App servers in the cluster, or does each Web server need its own "cluster" of App servers?
    Thanks in advance...
              

    What was that issue with deadlock and where is it described?
              Prasad Peddada wrote:
              > Also there was a dead lock issue with sp7 which was fixed in the later service
              > packs.
              >
              > Vinod Mehra wrote:
              >
              > > >> Specifically, the IIS plugin having memory leaks,
              > > These have already been fixed with SP7.
              > >
              > > >>and 4.5.1 with sp7 has issues when implementing clustering
              > > >>(the app servers in the cluster sometimes do not respond and sometimes
              > > >>contend for who owns the next request coming from the proxy).
              > >
              > > WebLogic server not responding can be because of many reasons. Most of the
              > > time it is because of bad configurations and sometimes because of
              > > Application
              > > problems itself. Now about recovering from such a hung server SP8 onwards
              > > the plug-ins have a configurable parameter "HungServerRecoverySecs",
              > > using which the plug-ins mark that server as bad (temporarily) and
              > > the requests fail over to the SECONDARY.
              > >
              > > >> I've also seen that when a HTTP 1.1 request comes to the Web server, the
              > > proxy
              > > >>sends a HTTP 1.0 request to the App server.
              > >
              > > 4.5.1 does not support HTTP 1.1 yet. So they will still be HTTP 1.0.
              > >
              > > >> I've also heard that sp10, due out in July will fix these issues. Can
              > > anyone verify this?
              > > Yes. Except the last one.
              > >
              > > >> Is this configuration ok?
              > > Yes the configuration you have described is a valid one.
              > >
              > > --Vinod.
              > >

  • Issue with logging into Presentation Services

    No luck fixing my issue using the search function.
    Administrator/Administrator or a blank password are not working. In NQSConfig.INI I set AUTHENTICATION_TYPE = BYPASS_NQS; but still cannot get into Presentation Services. Where do I define/set the password here?
    I tried to uninstall/reinstall and got prompted for the oc4jadmin password, which I also do not have.
    Edited by: cisGuy on Aug 16, 2010 10:12 AM

    Yes, I was then able to uninstall OBIEE and reinstall by resetting the oc4jadmin password. However, upon installation and before reboot, I was able to log into Presentation Services as Administrator/Administrator. Once I rebooted and started all services, I can get to the Presentation Services login screen, but the same credentials give me an invalid user/pass error message.
    What is going on here? All services are running:
    Oracle Java Host
    Oracle Presentation Services
    Oracle BI Server

  • NQSconfig.ini file multiple rpd

    Hi
    I am using 11g on a Linux box. I set 2 RPD files in my NQSConfig.ini file like:
    [Repository]
    RPD1 = ACDNEW_BI0010.rpd, DEFAULT;
    RPD2 = ACDNEW_BI0009.rpd;
    I have one BI Server, running on port 9703.
    Now how can I connect to the 2 RPDs at the same time in online mode?
    Thanks
    Gram

    Hi,
    This feature/option is no longer available in OBIEE 11g (it works only in OBIEE 10g). For running multiple RPDs on the same server you should go with an OBIEE 11g installation with multiple instances (try using instance 2 with a custom port).
    For more, refer to:
    http://docs.oracle.com/cd/E21764_01/bi.1111/e10539/c2_scenarios.htm#CHDDIDGE
    Thanks
    Deva

  • RPD set in NQSConfig.INI before the server starts is overwritten during startup

    Hi gurus,
    I set Abc.rpd in NQSConfig.INI before starting up the server, but after startup SampleAppLite.rpd is shown in Analysis/NQSConfig.INI :(
    Any inputs will help?
    thank you...

    thanks for your inputs....
    my bad.... I should have spelled out the details better....
    After trying the EM -> Deployment route, despite the EM screen showing Name_BI0001.rpd, after bouncing the services Analysis failed to pick up my desired RPD and showed the default SampleAppLite.rpd...
    So I thought of doing it differently, as in editing the .ini file to see if it picks it up, but as I said it failed to read from the .ini and instead overwrote the .ini with SampleAppLite.rpd.

    Hi Guys,               When i am downloading the data from SAP into Application server.I am getting all the columns into my Charatcer type final Internal table.when i am using Open data set for opening th file in Application server and using transfer