Low Catalog Cache Hitrate
Hi,
Currently, our APO production system has a low catalog cache hitrate of 80%; it rarely climbs above 85%.
Can anybody help me identify the factors that affect this, and how we can improve it?
Below are current statistics and parameters in the system:
Cache Sizes
Size in KB Size in pages
I/O Buffer Cache 3,353,808 419,226
Data Cache 3,351,552 418,944
Converter 1,952 244
Other 304 38
Catalog Cache 35,328 4,416
Sequence Cache 8 1
Cache Accesses
Accesses Successful Unsuccessful Hit Rate
Entire Data Cache 210,815,372 210,739,279 76,093 99.96%
History/Undo 20,076,913 20,076,913 0 100.00%
OMS Data 158,934,232 158,861,439 72,793 99.95%
SQL Data 31,804,227 31,800,927 3,300 99.99%
Catalog Cache 20,212,459 16,282,442 3,928,363 80.56%
Sequence Cache 0 0 0 100.00%
Data Cache Usage
Usage in KB Usage in Pages Occup.
Entire Data Cache 1,386,752 173,344 41.38%
SQL Data 30,432 3,804 0.91%
OMS Data 722,328 90,291 21.55%
History/Undo 633,816 79,227 18.91%
Cache-Specific Parameter Settings
CACHE_SIZE = 419226
CAT_CACHE_SUPPLY = 4416
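As a quick cross-check (plain arithmetic, not any SAP tool), the posted figures are internally consistent: a hit rate is successful accesses divided by total accesses, and a cache size in KB is its page count times the 8 KB page size (PAGESIZE = 8192 appears in the parameter list later in this thread):

```python
# Cross-check of the posted cache statistics (illustrative only).

PAGE_KB = 8  # MaxDB page size: PAGESIZE = 8192 bytes

def hit_rate_pct(successful, total):
    """Hit rate in percent, as shown in the Cache Accesses table."""
    return round(successful / total * 100, 2)

# Catalog Cache row: 16,282,442 successful out of 20,212,459 accesses
print(hit_rate_pct(16_282_442, 20_212_459))  # 80.56, matching the report

# CAT_CACHE_SUPPLY = 4416 pages -> 4416 * 8 KB = 35,328 KB,
# matching the Catalog Cache row of the Cache Sizes table
print(4416 * PAGE_KB)
```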
Let me know your comments and suggestions.
Thanks in advance.
Best regards,
Kris Caldoza
Thanks for the analysis, Lars.
By how much should we try to increase the CAT_CACHE_SUPPLY value? What else can I monitor in the system that will help me analyze the statistics and probably help me come up with an appropriate value for CAT_CACHE_SUPPLY?
Below are the current parameter values in the system:
Parameter / Time New Value
|-- ALLOW_MULTIPLE_SERVERTASK_UKTS NO
|-- AUTO_RECREATE_BAD_INDEXES NO
|-- BACKUP_BLOCK_CNT 8
|-- CACHE_IN_SHARED_MEMORY NO
|-- CACHE_SIZE 419226
|-- CAT_CACHE_SUPPLY 4416
|-- CHECK_TABLE_WIDTH NO
|-- CONTROLUSERID CONTROL
|-- CONVERTER_REGIONS 8
|-- DATA_VOLUME_GROUPS 1
|-- DATA_VOLUME_NAME_0001 /sapdb/LCP/sapdata/DISKD0001
|-- DATA_VOLUME_NAME_0002 /sapdb/LCP/sapdata/DISKD0002
|-- DATA_VOLUME_SIZE_0001 1024000
|-- DATA_VOLUME_SIZE_0002 512000
|-- DATA_VOLUME_TYPE_0001 L
|-- DATA_VOLUME_TYPE_0002 L
|-- DATE_TIME_FORMAT INTERNAL
|-- DEADLOCK_DETECTION 0
|-- DEFAULT_CODE ASCII
|-- DIAG_HISTORY_NUM 2
|-- DIAG_HISTORY_PATH /sapdb/data/wrk/LCP/DIAGHISTORY
|-- EXPAND_COM_TRACE NO
|-- EXTERNAL_DUMP_REQUEST NO
|-- FBM_VOLUME_BALANCE 10
|-- FBM_VOLUME_COMPRESSION 50
|-- FORBID_LOAD_BALANCING NO
|-- FORMATTING_MODE PARALLEL
|-- FORMAT_DATAVOLUME YES
|-- HASHED_RESULTSET NO
|-- HASHED_RESULTSET_CACHESIZE 262144
|-- HEAP_CHECK_LEVEL 0
|-- HS_STORAGE_DLL libhsscopy
|-- HS_SYNC_INTERVAL 50
|-- INIT_ALLOCATORSIZE 262144
|-- INSTANCE_TYPE LVC
|-- JOIN_MAXTAB_LEVEL4 64
|-- JOIN_MAXTAB_LEVEL9 5
|-- JOIN_OPERATOR_IMPLEMENTATION NO
|-- JOIN_SEARCH_LEVEL 0
|-- JOIN_TABLEBUFFER 128
|-- KERNELDIAGSIZE 800
|-- KERNELTRACESIZE 1197
|-- KERNELVERSION KERNEL 7.5.0 BUILD 035-123-123-944
|-- LOAD_BALANCING_CHK 0
|-- LOAD_BALANCING_DIF 10
|-- LOAD_BALANCING_EQ 5
|-- LOCAL_REDO_LOG_BUFFER_SIZE 0
|-- LOG_BACKUP_TO_PIPE NO
|-- LOG_IO_QUEUE 66
|-- LOG_MIRRORED NO
|-- LOG_SEGMENT_SIZE 170666
|-- LOG_VOLUME_NAME_001 /sapdb/LCP/saplog/DISKL001
|-- LOG_VOLUME_SIZE_001 512000
|-- LOG_VOLUME_TYPE_001 F
|-- LRU_FOR_SCAN NO
|-- MAXBACKUPDEVS 2
|-- MAXCPU 16
|-- MAXDATAVOLUMES 12
|-- MAXLOCKS 300000
|-- MAXLOGVOLUMES 2
|-- MAXPAGER 64
|-- MAXRGN_REQUEST 3000
|-- MAXSERVERTASKS 22
|-- MAXUSERTASKS 68
|-- MAXVOLUMES 15
|-- MAX_HASHTABLE_MEMORY 5120
|-- MAX_RETENTION_TIME 480
|-- MAX_SERVERTASK_STACK 100
|-- MAX_SINGLE_HASHTABLE_SIZE 512
|-- MAX_SPECIALTASK_STACK 100
|-- MCOD NO
|-- MEMORY_ALLOCATION_LIMIT 0
|-- MIN_RETENTION_TIME 60
|-- MP_RGN_LOOP 100
|-- OMS_HEAP_BLOCKSIZE 10000
|-- OMS_HEAP_COUNT 16
|-- OMS_HEAP_LIMIT 4192256
|-- OMS_HEAP_THRESHOLD 100
|-- OMS_STREAM_TIMEOUT 30
|-- OMS_VERS_THRESHOLD 2097152
|-- OPMSG1 /dev/console
|-- OPMSG2 /dev/null
|-- OPTIMIZE_FETCH_REVERSE YES
|-- OPTIMIZE_JOIN_OPERATOR_SORT YES
|-- OPTIMIZE_JOIN_OUTER YES
|-- OPTIMIZE_JOIN_PARALLEL_SERVERS 0
|-- OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
|-- OPTIM_CACHE NO
|-- OPTIM_FIRST_ROWS NO
|-- OPTIM_INV_ONLY YES
|-- OPTIM_JOIN_FETCH 0
|-- OPTIM_MAX_MERGE 500
|-- PACKET_SIZE 131072
|-- PREALLOCATE_IOWORKER NO
|-- REQUEST_TIMEOUT 180
|-- RUNDIRECTORY /sapdb/data/wrk/LCP
|-- SEQUENCE_CACHE 1
|-- SESSION_TIMEOUT 900
|-- SET_VOLUME_LOCK YES
|-- SHAREDSQL NO
|-- SHAREDSQL_COMMANDCACHESIZE 32768
|-- SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
|-- SHOW_MAX_STACK_USE NO
|-- SIMULATE_VECTORIO NEVER
|-- SUPPRESS_CORE YES
|-- SYMBOL_DEMANGLING NO
|-- SYMBOL_RESOLUTION YES
|-- TRACE_PAGES_BUP 0
|-- TRACE_PAGES_EV 2
|-- TRACE_PAGES_GC 20
|-- TRACE_PAGES_LW 5
|-- TRACE_PAGES_PG 3
|-- TRACE_PAGES_SV 5
|-- TRACE_PAGES_TI 2
|-- TRACE_PAGES_US 10
|-- TRACE_PAGES_UT 5
|-- UPDATESTAT_SAMPLE_ALGO 0
|-- USE_COROUTINES NO
|-- USE_OPEN_DIRECT NO
|-- USE_STACK_ON_STACK NO
|-- USE_SYSTEM_PAGE_CACHE YES
|-- USE_UCONTEXT YES
|-- UTILITY_PROTSIZE 100
|-- VOLUMENO_BIT_COUNT 8
|-- XP_CONVERTER_REGIONS 0
|-- XP_DATA_CACHE_RGNS 0
|-- XP_MAXPAGER 0
|-- XP_MP_RGN_LOOP 0
|-- AKDUMP_ALLOWED YES
|-- BACKUPHISTFILE dbm.knl
|-- BACKUPMED_DEF dbm.mdf
|-- CATCACHE_MINSIZE 262144
|-- DATACACHE_RGNS 64
|-- DELAYCOMMIT NO
|-- DELAYLOGWRITER 0
|-- DIAGSEM 0
|-- DWIO_AREA_FLUSH 50
|-- DWIO_AREA_SIZE 50
|-- DWLRU_TAIL_FLUSH 25
|-- DYNTASK_STACK NO
|-- _EVENTFILE knldiag.evt
|-- _EVENTSIZE 0
|-- EVENTALIVE_CYCLE 0
|-- FBMLOW_IO_RATE 10
|-- IDXFILELIST_SIZE 2048
|-- IOPROCSFOR_PRIO 0
|-- IOPROCSFOR_READER 0
|-- IOPROCSPER_DEV 2
|-- IOPROCSSWITCH 2
|-- _KERNELDIAGFILE knldiag
|-- _KERNELDUMPFILE knldump
|-- _KERNELTRACEFILE knltrace
|-- LOCKSUPPLY_BLOCK 100
|-- _MAXEVENTS 100
|-- _MAXEVENTTASKS 1
|-- MAXGARBAGECOLL 10
|-- MAXTASKSTACK 1500
|-- _MAXTRANS 368
|-- MAXMESSAGE_FILES 69
|-- MBLOCKDATA_SIZE 32768
|-- MBLOCKQUAL_SIZE 16384
|-- MBLOCKSTACK_SIZE 32768
|-- MBLOCKSTRAT_SIZE 8192
|-- MINREPLYSIZE 4096
|-- MINSERVER_DESC 17
|-- MPDISP_LOOPS 2
|-- MPDISP_PRIO YES
|-- MPRGN_BUSY_WAIT YES
|-- MPRGN_DIRTY_READ YES
|-- MPRGN_PRIO YES
|-- MPRGN_QUEUE YES
|-- MULTIO_BLOCK_CNT 8
|-- OMSREGIONS 8
|-- OMSRGNS 33
|-- PAGESIZE 8192
|-- PRIOBASE_COM 10
|-- PRIOBASE_IOC 80
|-- PRIOBASE_RAV 80
|-- PRIOBASE_REX 40
|-- PRIOBASE_U2U 100
|-- PRIOFACTOR 80
|-- READAHEADBLOBS 32
|-- RESTARTTIME 600
|-- ROWRGNS 8
|-- _RTEDUMPFILE rtedump
|-- SERVERDBFOR_SAP YES
|-- SERVERCMD_CACHE 23
|-- SERVERDESC_CACHE 93
|-- _SHAREDDYNDATA 419575
|-- _SHAREDDYNPOOL 155849
|-- _SHMKERNEL 3522556
|-- SVP1_CONV_FLUSH NO
|-- TABRGNS 8
|-- TASKCLUSTER01 tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
|-- TASKCLUSTER02 ti,100*dw;5*us;
|-- TASKCLUSTER03 equalize
|-- TRANSRGNS 8
|-- _UNICODE YES
|-- USEIOPROCS_ONLY NO
|-- UTILITYPROTFILE dbm.utl
|-- WORKDATASIZE 8192
--- WORKSTACKSIZE 8192
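On the "by how much" question: there is no universal target value. A common approach (my suggestion, not an official SAP sizing formula) is to raise CAT_CACHE_SUPPLY stepwise, e.g. by doubling it, and re-measure the hit rate after a representative workload. The memory cost of doing so is small next to the roughly 3.3 GB I/O buffer cache:

```python
# Memory cost of growing CAT_CACHE_SUPPLY stepwise (illustrative sketch,
# not an official SAP sizing formula).

PAGE_KB = 8  # catalog cache pages are 8 KB (PAGESIZE = 8192)

def cache_mb(pages):
    return pages * PAGE_KB / 1024

value = 4416  # current CAT_CACHE_SUPPLY
for _ in range(4):
    print(f"CAT_CACHE_SUPPLY = {value:>6} pages -> {cache_mb(value):6.1f} MB")
    value *= 2
```

Even three doublings (4416 to 35328 pages) cost under 300 MB, so experimenting with larger values and watching the hit rate is cheap.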
Thanks!
Similar Messages
-
Catalog Cache Hitrate - Low hit rate in the catalog cache (76.170000% < 80.000000%)
HI...
LC10
in Alert Display CCMS Monitoring architecture
We get the error:
Context : MaxDB Monitoring: LCA
Caches : Caches
Short Name : Catalog Cache Hitrate
Alert Text : Low hit rate in the catalog cache (76.170000% < 80.000000%)
Rgds
PR
http://maxdb.sap.com/doc/7_7/45/748f0f12e14022e10000000a1553f6/content.htm
Hello,
1. I recommend creating an SAP message to get SAP support.
2. It would be helpful to know the SAP Basis SP of your system and the liveCache version you are currently using,
and also which liveCache applications were running at the time this alert was reported.
3. In general the information is available at MAXDB library:
http://maxdb.sap.com/doc/7_7/default.htm
Tools
Database Analyzer
Database Analyzer Messages
Catalog Cache Hit Rate
Regards, Natalia Khlopina -
Catalog Cache Hitrate is too low
The catalog cache hitrate of the DB is too low (40%).
The DB is a MaxDB.
What can I do to bring the catalog cache hitrate up?
Hello Damian,
You would need to increase the size of the catalog cache.
Thanks,
Siva Kumar -
MaxDB Catalog cache hit ratio too low
Hello,
We use Maxdb version 7.6 and 7.7 on various systems.
Each week the EarlyWatch report indicates to me that the catalog cache ratio is too low ( < 60% ).
We increased the parameter CAT_CACHE_SIZE from 10000 to 20000, then 50000, then 100000, then 200000, then 400000.
That did not have any effect on the catalog cache hit ratio.
The SDN forums and SAP Notes give no method to improve this ratio other than increasing the value of the parameter CAT_CACHE_SIZE.
Do you have another solution?
Thank you in advance
Best regards
Frédéric Blaise
e-Kenz S.A.
Hi,
don't be too afraid of this value. If you use (internally) statements where all tables have to be checked, such as update statistics (with certain options) or selects on statistical info, then this value goes down: there are so many tables in an R/3 system that no CAT_CACHE_SIZE value will be sufficient to hold all their descriptions in the cache. With commands like these the cache is turned over again and again, which reduces the hit ratio. It is a good idea to increase the value so that many of the normally used descriptions fit in parallel, but unlike other caches the hit ratio will always be low.
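The churn effect described above can be reproduced with a toy LRU cache (illustrative Python, nothing MaxDB-specific; the table counts are invented): a pass that touches more table descriptions than the cache has slots gets a hit rate of zero regardless of size, while a workload with a hot set that fits does fine.

```python
# Toy LRU cache illustrating catalog-cache churn (illustrative only).
import random
from collections import OrderedDict

class LRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def access(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)         # mark as most recently used
        else:
            self.misses += 1
            self.data[key] = True
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least recently used

    def hit_rate(self):
        return self.hits / (self.hits + self.misses)

# "update statistics"-style pass: 50,000 table descriptions cycled
# through a 5,000-slot cache; every entry is evicted before it is reused.
scan = LRU(5000)
for _ in range(3):
    for table in range(50_000):
        scan.access(table)
print(scan.hit_rate())  # 0.0

# Ordinary workload: 90% of accesses go to a hot set of 1,000 tables
# that fits in the cache, so the hit rate stays high.
random.seed(1)
hot = LRU(5000)
for _ in range(150_000):
    table = random.randrange(1000) if random.random() < 0.9 else random.randrange(50_000)
    hot.access(table)
print(round(hot.hit_rate(), 2))  # high (most hot-set accesses hit)
```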
Elke -
How to temporary bypass the catalog cache?
Hello!
We have E-Commerce 7.0 and the product catalog is in TREX. ISA has a feature that caches the catalog in the catalog cache. I don't want to switch that feature off, but in some cases I would like to bypass the cache and make queries directly against TREX. How can I do that? Here is my code now:
CatalogBusinessObjectManager catBom =
    (CatalogBusinessObjectManager) userSessionData.getBOM(
        CatalogBusinessObjectManager.CATALOG_BOM);
if (catBom != null) {
    WebCatInfo theCatalog = catBom.getCatalog();
    WebCatItemList itemList = null;
    CatalogFilterFactory fact = CatalogFilterFactory.getInstance();
    IFilter filter = fact.createAttrContainValue("OBJECT_ID", productId);
    try {
        IQueryStatement queryStmt = theCatalog.getCatalog().createQueryStatement();
        queryStmt.setStatement(filter, null, null);
        IQuery query = theCatalog.getCatalog().createQuery(queryStmt, false);
        itemList = new WebCatItemList(theCatalog, query);
    } catch (Exception e) { // closing catch added for readability; the original post was cut off here
        // handle/log the error
    }
}
I know that I can clear the catalog cache, but I don't want to do that either. I just want to bypass it in some cases. Please help me.
Thank you very much!
Regards,
Mikko
The reason I want to do this is to check from ISA that TREX is up and running and that queries can be made.
-
2 HD laptop setup - Best place to put Catalog, Cache, and Images??
If I have two 500GB 7200rpm hard drives in my MBP, what is the best place for LR3, the catalog/previews, ACRcache, and the image files themselves?
SATA3 connection for both HDs
Should the boot drive hold the app and cache, while the second drive hold the files and catalog?
Should the catalog and cache always stay on the same drive?
Does it matter at all where the image (raw) files sit?
Thanks!!
I'd just keep the images on a different drive with a 2-internal-drive setup.
-
Low database cache hit ratio (85%)
Hi Guys,
I understand that a high db cache hit ratio doesn't necessarily indicate that the database is healthy.
The database might be doing additional "physical" reads due to un-tuned SQL.
However, can someone explain why a low cache hit ratio might not indicate that the db is unhealthy, i.e. that the db needs additional memory allocated?
What i can think of is probably:
1. the database might query different data most of the time. As such, the data is not read again from cache before it ages out. Even if I add additional memory, the data might not be read again (from memory).
2. ?
3. ?
I'm quite reluctant to list the databases with below-90% hit ratios in the monthly report to management. To them, below 90% means unhealthy.
If these ratios are used in the monthly report, it will take a long section to explain why the ratios are not met even though there is no performance concern.
As such will need your expert advise on this.
thanks
Edited by: Chewy on Mar 13, 2012 1:23 AM
Nikolay Savvinov wrote:
In isolation, ratios are useless, but trends in ratios can point to potential problem. If your BCHR is steadily degrading over time, this is something to worry about (you'll have to examine your application for scalability issues)
I used to think that there was a case for trending through a ratio in the days when databases were small, simple and (by modern standards) not very busy. But I'm no longer sure it was even a good idea then. How much of a change do you need to see before you start worrying, and what time-granularity would you take as your baseline? When a ratio varies between 98% and 99% during daylight hours, how do you spot a very large problem that's only going to make a change of 0.01% over the course of a couple of weeks?
I really don't think there's any good SIMPLE way of producing a management sound-bite for every database in the system; each database needs a personal touch, and the number of figures you need to supply on each is not going to be easy to grasp without some graphic assistance.
A suggestion I have is simply to pick three "representative" queries from the application (one "small", one "medium" and one "large") and run them once every hour, capturing the plan_hash_value, elapsed time, disk reads, buffer gets, and CPU for each. A daily graph (4 lines each) of each query will give management the big picture of variation in response time; a longer-term graph based on the daily average with (say) best and worst excluded will give a trend warning. Obviously each database (or even application within a database) needs its own three queries, and there may be periods during the day when it is not relevant to worry about a particular query.
(NB In the past I've run baseline queries from a pl/sql package called by dbms_job, or dbms_scheduler, and stored the resulting cost figures in the database - capturing all the session stats, wait event and time model information)
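The insensitivity of the ratio can be shown with a line of arithmetic (numbers invented for illustration):

```python
# Why a big new problem barely moves the hit ratio (numbers invented).
gets = 1_000_000_000          # logical reads per day on a busy instance
phys = 10_000_000             # physical reads per day -> 99% BCHR

bchr_before = (gets - phys) / gets * 100

# A new badly-tuned query adds 100,000 extra physical reads a day,
# a real I/O regression, yet the ratio hardly changes:
extra = 100_000
bchr_after = (gets + extra - (phys + extra)) / (gets + extra) * 100

print(round(bchr_before, 4))               # 99.0
print(round(bchr_after, 4))                # 98.9901
print(round(bchr_before - bchr_after, 2))  # 0.01 percentage points
```

A 1% increase in total physical reads, invisible in the ratio, can still be very visible in elapsed time, which is why the representative-query approach above measures time rather than a ratio.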
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b> -
Error backing up a catalog: "low on disc space"
I have a problem backing up my catalog in Photoshop Elements 11. I have a new computer and want to move my Photoshop catalog to it. But when trying to back up on the existing PC, it first says «Please wait while Elements Organizer prepares for backup...». Then after a while I get the following error message: «Your system is low on disc space, and Elements Organizer cannot perform this operation. Try removing some files to free up disk space.»
I'm trying to back up to a directory on a USB drive with 847 GB free space. Photoshop says that it will back up «Size: 143 041,10 MB», so there should be plenty of free space. I have no problem copying files to this drive, so it is accessible.
Photoshop is installed on the C-drive (18,4 GB free), while my picture files are stored either on C:, on E: (46,6 GB free) or on I: (1,76 GB free).
Any suggestion for how to solve this?
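For what it's worth, the space arithmetic in the post checks out (plain unit conversion, reading «143 041,10 MB» as 143,041.10 MB), which suggests the free-space check is failing on some drive other than the target, e.g. the much tighter C: or I: drives; that is an assumption, not a confirmed cause:

```python
# Unit check of the figures in the post (illustrative only).
backup_mb = 143_041.10       # size Elements reports it will back up
free_gb_on_target = 847      # free space on the target USB drive

backup_gb = backup_mb / 1024
print(round(backup_gb, 1))   # ~139.7 GB, far below the 847 GB free
assert backup_gb < free_gb_on_target
```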
Info regarding my system from Photoshop «System Info» in «Help»:
Elements Organizer 11.0.0.0
Core Version: 11.0 (20120830.r.32025)
Language Version: 11.0 (20120830.r.32025)
Current Catalog:
Catalog Name: Arild PE v6
Catalog Location: C:\Documents and Settings\All Users\Application Data\Adobe\Elements Organizer\Catalogs\Arild PE v6\
Catalog Size: 71,5MB
Catalog Cache Size: 945,9MB
System:
Operating System Name: XP
Operating System Version: 5.1 Service Pack 3
System Architecture: Intel CPU Family:6 Model:15 Stepping:6 with MMX, SSE Integer, SSE FP
Built-in Memory: 3,5GB
Free Memory: 2,3GB
Important Drivers / Plug-ins / Libraries:
Microsoft DirectX Version: 9.0
Apple QuickTime Version: 7.70
Adobe Reader Version: 11.0
Adobe Acrobat Version: Not installed
Hi
I'm getting this same issue, also Elements 11. The backup was fine a month ago. Just wondering if you found a way to solve this.
I can do an incremental backup just fine, but I want to do a full backup, trying to get this to work to migrate to a new laptop in a week or two's time.
I've tried all the tips so far, incl. re-indexing (which has helped Organizer performance significantly).
My catalog size to be backed up is 418GB (some of my catalog is on the C: drive, but most of my videos in the catalog are stored on the H: drive).
I have 865GB free on H:, my target disk.
I have 93GB of 451GB free on C: drive
System info below.
Many thanks for any advice
Stephen
Elements Organizer 11.0.0.0
Core Version: 11.0 (20120830.r.32025)
Language Version: 11.0 (20120830.r.32025)
Current Catalog:
Catalog Name: My Catalog
Catalog Location: C:\ProgramData\Adobe\Elements Organizer\Catalogs\My Catalog\
Catalog Size: 124.4MB
Catalog Cache Size: 3.6GB
System:
Operating System Name: Windows 7
Operating System Version: 6.1 Service Pack 1
System Architecture: Intel CPU Family:6 Model:14 Stepping:5 with MMX, SSE Integer, SSE FP
Built-in Memory: 3.9GB
Free Memory: 917.8MB
Important Drivers / Plug-ins / Libraries:
Microsoft DirectX Version: 9.0
Apple QuickTime Version: 7.75
Adobe Reader Version: 10.1
Adobe Acrobat Version: Not installed
CD and DVD drives:
D: (HL-DT-ST DVD+-RW BUS: 2 ID: 2 Firmware: GA31N) -
Currently I am hitting a performance issue with a low data cache hit ratio (~0.5 after each calculation). Is there any way to improve the hit ratio? I have increased the size of the data cache by 50%, but got only a little improvement. Is the bottleneck on I/O? How can I improve it?
Our Essbase version is 6.2.1, using buffered I/O.
Thanks
Louis
ok... the cache hit ratio indicates the percentage of time that a requested piece of information is already in the cache. A higher hit ratio indicates that the data is in the cache more often, which improves performance because the requested data does not have to be retrieved from disk for the next process. It sounds like you must have a pretty sparse database; you might be able to raise the hit ratio by using RLE compression instead of bitmap if your database is very sparse or has a lot of repeating values. You have to export all the data, clear the database, change the compression type to RLE, re-load the export file, and then calculate to see if it makes much difference. (You have to export all data and then reload it so that you don't have mixed compression types.) Hope that helps you out.
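As a toy illustration of why RLE wins on sparse blocks (plain run-length encoding in Python, not Essbase's actual storage code; the block contents are invented):

```python
# Minimal run-length encoding sketch: long runs of repeated values
# (e.g. #Missing cells in a sparse block, modeled here as None)
# collapse to a handful of (value, count) pairs.

def rle(values):
    runs, prev, count = [], None, 0
    for v in values:
        if v == prev:
            count += 1
        else:
            if count:
                runs.append((prev, count))
            prev, count = v, 1
    if count:
        runs.append((prev, count))
    return runs

# A very sparse block: 1000 cells, only one of them populated
block = [None] * 900 + [42.0] + [None] * 99
print(len(block), "cells ->", len(rle(block)), "runs")  # 1000 cells -> 3 runs
```

A bitmap scheme still stores one bit per cell; RLE cost grows with the number of runs instead, which is why it pays off when data is sparse or repetitive.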
-
E-Commerce Catalog error and Log Configurator problem?
Hello,
I am trying to browse a B2B shop (http://server:port/b2b/b2b/init.do) in CRM 5.0 and am getting the following error message:
The catalog that you have selected is currently unavailable; try again later
I am using a CRM 5.0 system (IDES client) and am basically trying to set up an E-Commerce development environment.
The following are the steps I have done so far for the E-Commerce configuration:
1. Configured ISADMIN (http://server:port/isauseradm/admin/xcm/init.do).
1.1 Changed following params for Start->General Application Settings->Customer->isauseradm->isauseradmconfig:
>> SSLEnabled: false
>> appinfo: true
>> show start.jsp: true
>> AddUserToPartner: true
>> AcceptExistingUser: true
1.2 Created a new JCo connection under Start->Components->Customer->jco and entered my back end CRM System details and saved it as CRM_800_JCO. I also did Run Test which was successful.
1.3 Created a new JCo connection under Start->Application Configurations->Customer and entered following params and saved it with name CRM_800_ISAUSERADMSTD:
>> Base configuration: isauseradmStandard
>> default configuration: X
>> active configuration: X
>> jcodata: CRM_800_JCO
>> backendtype: crmdefault
>> usertype: CRM_Standalone
>> uidata: default
2. Configured B2B (http://server:port/b2b/admin/xcm/init.do).
2.1 Changed following params for Start->General Application Settings->Customer->b2b->b2bconfig:
>> SSLEnabled: false
>> appinfo: true
>> show start.jsp: true
2.2 Created a new JCo connection under Start->Components->Customer->jco and entered my backend CRM System details and saved it as CRM_800_JCO. I also did Run Test which was successful.
2.3 Created a new JCo connection under Start->Application Configurations->Customer and entered following params and saved it with name CRM_800_B2BCRMSTD:
>> Base configuration: b2bcrmstandard
>> default configuration: X
>> active configuration: X
>> jcodata: CRM_800_JCO
>> usertype: CRM_Standalone
3. I was not really sure whether the ShopAdmin config is required, but I did it anyway. Configured SHOPADMIN (http://server:port/shopadmin/admin/xcm/init.do).
3.1 Changed following params for Start->General Application Settings->Customer->shopadmin->shopadminconfig:
>> SSLEnabled: false
>> appinfo: true
>> show start.jsp: true
3.2 Created a new JCo connection under Start->Components->Customer->jco and entered my backend CRM System details and saved it as CRM_800_JCO. I also did Run Test which was successful.
3.3 Created a new JCo connection under Start->Application Configurations->Customer and entered following params and saved it with name CRM_800_CRMSHOP:
>> Base configuration: crmshop
>> default configuration: X
>> active configuration: X
>> jcodata: CRM_800_JCO
4. Restarted CRM J2EE.
5. Setup TREX 7.0.
5.1 From TREX Administration, created a new connection (Type A, i.e., using System Number and Application Server Host) for CRM System and also RFC Dest. (sm59) in CRM System.
5.2 Restarted TREX and connected this connection.
5.3 Following are the visible column values for this connection in TREX Administration:
>> Connection Status: <connected>
>> Configuration Satus: Green
>> SAP System: CRM
>> RFC Destination: TREX_DEFAULT
>> Gateway: local
>> RfcServer Instances: 1 (no automatic changes)
>> TREXRfcServer Processes: 1
>> Workprocesses: 6 (4 DIA, 2BGD)
5.4 Did a RFC Connection test in CRM System using SM59 which was successful as well.
5.5 Using Transaction SRMO (Retrieval : Search Server Relation Monitor) in CRM System changed the RFC Destination for Search server ID DRFUZZY (for both I and S type RFC Server destination action) to TREX_DEFAULT. Did a connection test for this and it was successful.
6. Initiated Replication using transaction COMM_PCAT_IMS_INIT with following params:
>> Product Catalog: PCSHOP
>> Variant: VAR_EN
>> Search Server Relation: DRFUZZY
>> Publishing Computer ID:
>> Allows Parallel Processing:
>> Publish Documents via HTTP:
>> Transfer Document Content: X
>> Package Size of Indexing: 5,000
>> Processing Existing Indexes: Overwrite Index Only When OK
>> Behavior when Error Messages Occur: Cancel Variant Replication
7. Logged into ISADMIN User Config (http://server:port/isauseradm/useradmin/init.do) and created a new user using the option New User and New Contact Person with an existing company of type Sold-To Party and which belongs to the sales area that is linked with PCSHOP Product Catalog (from IDES data).
Now, when i log into B2B Page (http://server:port/b2b/b2b/init.do) using the newly created user and click on the shop PC4BIZ_EN i get following error message:
The catalog that you have selected is currently unavailable; try again later
I also created a new Product Catalog and did the Initial Replication still getting the same above error message.
Can anyone please tell me what I am missing or what mistake I have made?
I even restarted CRM System, but the result is still the same.
Also, how do i check the log files for B2B? I checked the E-Commerce Admin Console (http://server:port/b2b/admin/index.jsp) and clicked on the logging link, but i get the following message:
Logging is now configured centrally in the J2EE Engine Visual Administrator (server service: Log Configurator)
How exactly do i configure and what is the right path for B2B Logging in J2EE Visual Administrator? And where will the log files for B2B be stored on the server?
I would really appreciate (and of course award points) for your help on this.
thanks and regards,
Vasu
Thanks for the Note reference. I will go through it now and try to check the log files.
And regarding the error message, I don't think it could be because "the catalog variant is not set right in the Shopadmin application", as all the values in Shopadmin seem to be correct. Anyhow, the following are the current values selected for the custom product catalog I created:
Shop Id: ZTEST
General Information
--> Usage
> Business Scenario: Internet Sales B2B
--> Authorizations
> Authorization Group: <blank>
--> User Administration
> Partner Function Contact Person: 00000015
> Country Group: <blank>
--> Billing Documents
> Display of Billing Documents: No Billing Documents
--> Store Locator
> Display Store Locator: <blank>
Catalog
--> Product Catalog
> Catalog Search: <blank>
> Catalog: ZTEST
> Catalog Variant: VAR_EN
> Catalog View: <blank>
> Hide Internal Catalog: <blank>
> Controlling Price Determination in the Catalog: via IPC
> Profile group for pricing related attributes for exchange products: <blank>
Transactions
--> General
> Allow transaction management for other business partners in hierarchy: <blank>
> Large Documents: Display All Items
> Document type can be chosen late: <blank>
> Batch Processing Permitted: <blank>
> Display product determination information: X
--> Order
> Choose Order Types: Order Type
> Order Type: ISBB
--> Order Template
> Order Templates Allowed: X
> Order Type: ISBB
--> Quotations
> Creating a Quotation: No Quotation
--> Contracts
> Contract Determination: <blank>
--> Contract Negotiations
> Allow Contract Negotiations: <blank>
Marketing
--> Global Product Recommendation
> Display Global Product Recommendation: <blank>
--> Personalized Product Recommendation
> Display Personalized Product Recommendation: <blank>
--> Product-Related Proposals
> Display Product-Related Proposals: <blank>
--> Campaigns
> Allow manual entry of campaigns: <blank>
Auction
--> Auction
> Auctions allowed: <blank>
Regarding the reason "The catalog was not replicated properly and is hence unavailable on TREX", is there any way to verify this? When I ran the initial replication (transaction COMM_PCAT_IMS_INIT with the values I gave in my first post) everything was green.
Also, how do I clear the catalog cache? Is it the same as clearing the Catalog Cache Statistics in the E-Commerce Administration Console (http://server:port/b2b/admin/index.jsp)?
Thanks and Regards, -
How can I increase my Library Cache Hit Ratio?
I was wondering if anyone can help me out regarding the library cache hit statistics I am getting.
Half of the samples that I have taken on a periodic interval today have ranged from 89% to 96%.
The SQL that I have used is:
SELECT
    SYSDATE,
    SUM(PINS - RELOADS) / SUM(PINS) * 100
FROM v$librarycache;
Also, running the AWR report from 4am to 4pm, see below:
Shared Pool Statistics AWR report
Begin End
Memory Usage %: 50.83 42.43
% SQL with executions>1: 55.56 77.13
% Memory for SQL w/exec>1: 74.12
Regarding the current SGA settings,
SQL> show parameter sga_target;
NAME TYPE VALUE
sga_target big integer 1184M
SQL>
SQL> select pool,name,bytes/1048576 "Size in MB" from v$sgastat where name = 'free memory';
POOL NAME Size in MB
shared pool free memory 135.742641
large pool free memory 15.9389648
java pool free memory 16
The main questions are,
a) is the low Library cache hit ratio particularly low?
b) if I want to improve this figure, it is advised that the 'SHARED_POOL_SIZE' parameter should be increased.
Obviously Oracle itself is in charge of this at present (sga_target is set), so what can I do to improve it?
c) Are there any really good links to help me understand the figures that appear in the AWR report?
a) is the low Library cache hit ratio particularly low?
I didn't understand this. Can you please rephrase?
b)
Well, indeed the shared pool controls the allocation and everything about the library cache, but that doesn't mean that increasing its value will stop all the issues. It is in fact among the hardest parameters to tune, because what primarily goes into it (SQL statements and code) is not written by the DBA/tuner but by developers, who sometimes don't do the things required to make the shared pool work properly. A very commonly occurring mistake is the lack of bind variables and the constant use of literals. In that case we eventually get a hard parse of every statement, which will eat up the shared pool sooner or later; no matter what size it is, it will come to the same result. The hit ratio is a guiding factor, not the end goal of tuning. It has been documented in many places (here, in other forums, even in OU books) that looking at and tuning the hit ratio alone may not lead to the expected or right results. You should look at the parse statistics in the AWR report: how many Parse (hard) and Parse (total) statistics are coming up? What are the SQL execute-to-parse, elapsed-time and related statistics? They will be helpful in getting things sorted out more nicely and correctly.
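The literal-vs-bind-variable point can be illustrated with any SQL client; here is a sketch using Python's built-in sqlite3 as a stand-in (the effect on Oracle's library cache is analogous, not identical):

```python
# Literals vs. bind variables: with literals every execution has a distinct
# statement text, so a shared statement cache can never reuse a parsed
# cursor; with a placeholder there is one text for all executions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")

literal_texts = set()
for i in range(1000):
    stmt = f"SELECT name FROM emp WHERE id = {i}"   # literal baked in
    literal_texts.add(stmt)
    conn.execute(stmt)

bind_texts = set()
for i in range(1000):
    stmt = "SELECT name FROM emp WHERE id = ?"      # bind placeholder
    bind_texts.add(stmt)
    conn.execute(stmt, (i,))

print(len(literal_texts), len(bind_texts))  # 1000 1
```

One shareable statement versus a thousand single-use ones is exactly the difference between soft and hard parses filling the shared pool.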
I am sure I have missed more than I said. Surely you will get better advice on this. Have patience and wait.
c) The documentation will be a good starting point; the Performance Tuning Guide in it is a good resource.
http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29
I am not sure about a specific book about AWR but this one is good for over all knowledge about tuning of Oracle.
http://www.mcgraw-hill.co.uk/html/007222729X.html
Aman.... -
External Lacie drive crashed, now I can't find a recent backup for LR3 catalog.
I have LR3 on a Mac running OS 10.6.8. My catalog, cache and images were stored on an external Lacie 1TB drive (partitioned so it can be used by a pc and mac). My catalog backup files were saved on another external drive MXTR as well as copies of all my images. The Lacie had started self ejecting while running LR3 and I thought it was maybe a problem with the cable, as it would work fine after rebooting. Yesterday it crashed LR3 and never remounted. I used Disk Utility to try and repair the disk and it instructed me to erase the disk and start over. I was not worried, since my catalog was backed up to another drive. 10 hours to erase the drive (the drive never quit during this time). Another 2 hours to copy my images from the MXTR to the Lacie (again the drive never self ejected during this time either). So now I go to find my most recent backup catalog. The most recent one I could find was dated 2/09/2012. I am baffled. What happened all those times I was backing up? I upgraded to LR3 in 2011, and I don't remember ever telling it to backup to another folder or drive.
Any ideas as to what the problem may be?What happened all those times I was backing up?
We can't possibly know what happened or where your backups are. But you can indeed search all of your available hard disks and see if there are any files named *.LRCAT with recent dates.
By the way, I would be extremely wary of using that Lacie drive any more. If it was me, I'd replace it. -
Photoshop Elements 9 Organizer corrupts Catalog with program crash in offset 000253da
I have just installed Photoshop Elements 9 on a Windows PC (no previous version was installed) but when I try and load photos into the organizer either from a camera or photos already on hard disk I get the following error shortly after Organizer has generated the thumbnails :
Elements 9 Organizer has encountered a problem and needs to close
Error signature
AppName photosopelementsorganizer.exe AppVer 9.0.0.0 ModName tbb.ddll
ModVer 2.2.2009.1011 Offset 000253da
Following this error the program closes and the catalog is left in a corrupt state so that when Organizer is restarted it crashes again with the same error. The only way to get Organizer to start cleanly is by creating a new catalog in the Catalog Manager window by pressing [Ctrl] [Alt] and [Shift] in the Welcome Window when starting Organizer. Repairing the catalog does not resolve the corruption.
I have tried un-installing Photoshop Elements, then re-installing, downloading the latest updates from Adobe and then loading in just one photo from either my camera (Pentax k-x) or a file on the Hard Disk, but still get the same catalog corruption.
Can anyone help? The Organizer is unusable with this fault. My Windows XP operating system is at the latest fix level and all other software on this machine is extremely robust.
My system information is :
Elements Organizer 9.0.0.0
Core Version: 9.0 (20100905.m.9093)
Language Version: 9.0 (20100905.m.9093)
Current Catalog:
Catalog Name: Catalog Dave 6
Catalog Location: C:\Documents and Settings\All Users\Application Data\Adobe\Elements Organizer\Catalogs\Catalog Dave 6
Catalog Size: 129KB
Catalog Cache Size: 25KB
System:
Operating System Name: XP
Operating System Version: 5.1 Service Pack 3
System Architecture: AMD CPU Family:6 Model:10 Stepping:0 with MMX, SSE Integer, SSE FP
Built-in Memory: 959.5MB
Free Memory: 230MB
Important Drivers / Plug-ins / Libraries:
Microsoft DirectX Version: 9.0
Apple QuickTime Version: 7.13
Adobe Reader Version: 9.3
Adobe Acrobat Version: Not installed
CD and DVD drives:
D: (DVD+RW RW5240 BUS: 1 ID: 1 Firmware: 1.19)
Try the solution here:
http://feedback.photoshop.com/photoshop_family/topics/photoshop_elements_9_cd_program_purchased#reply_5626978 -
Converting Catalogs From Photoshop Album (2.0) to Photoshop Elements 13
My Photoshop Album currently holds some 23,000 photographs/jpgs from 2003 to date. Each is catalogued to provide a very high level of granularity to assist in identifying specific photographs and their subject content.
I have purchased Photoshop Elements to update the management of the collection. The Adobe information identifies it as a suitable successor, but indicates that some effort is likely to be required to update the catalog. Using Catalog Manager I attempted to Convert the old catalog, which didn’t work because it “couldn’t be seen” in Elements. Further research in the Forum and other Adobe resources pointed to the Catalog Conversion Utility. This also failed to work.
I then reduced the number of photographs to about 4,500 (year 2014). I have noted, by the way, that this had virtually no effect on the .psa file size. As I gradually deleted photos from the catalog, I came across about 100 items which cannot be deleted and which return the following error message:
The ODBC data source reported the following error:
“[Microsoft][ODBC Microsoft Access Driver] Indicator variable required but not supplied on column number 10 (fDateTimeHigh)”
I have tried a number of ways to delete the photos, including removing them from their folder and “Reconnecting the Missing File” with/without the same name in a different folder, and am generally getting very frustrated with my lack of knowledge. Googling and searching for help and pointers hasn’t found the solution.
What or where can I find a way of deleting these errant files? (I have MS Access if this is a tool that might get me the solution.)
Note that one very old and small Catalog did convert and successfully integrated into Elements so it all works in principle. It seems to point to these problem jpgs.
Any help would be much appreciated.
FROM PHOTOSHOP ALBUM
System Details:
#### NOTE: the following is incorrect (copied from “System Info” in Album Help); it shows the original build OS. The machine currently runs Windows 7 with the latest updates. See the Photoshop Elements section further down for current System Info. ####
Adobe Photoshop Album 2.0.0.16271
Current Catalog:
Catalog Name: Main.psa
Catalog Location: C:\ProgramData\Adobe\Photoshop Album\Catalogs\
Catalog Size: 45MB
Catalog Cache Size: 536MB
System:
Operating System Name: XP
Operating System Version: 6.1 Service Pack 1
System Architecture: Intel CPU Family:6 Model:10 Stepping:7 with MMX, SSE Integer, SSE FP
Built-in Memory: 4GB
Free Memory: 4GB
Important Drivers / Plug-ins / Libraries:
Microsoft DirectX Version: 9.0
Apple QuickTime Version: 7.76
Adobe Image Viewer Version: Not installed
Adobe Acrobat Reader Version: 11.0
Adobe Acrobat Version: 10.1
Adobe Atmosphere Version: 3.0
FROM PHOTOSHOP ELEMENTS
Elements Organizer 13.0.0.0
Core Version: 13.0 (20140831.m.62014)
Language Version: 13.0 (20140831.m.62014)
Current Catalog:
Catalog Name: Mum & Dad Document Scans
Catalog Location: C:\ProgramData\Adobe\Photoshop Album\Catalogs\New Catalogs\Mum & Dad Document Scans\
Catalog Size: 132KB
Catalog Cache Size: 31.8KB
System:
Operating System Name: Windows 7
Operating System Version: 6.1 Service Pack 1
System Architecture:
Built-in Memory: 8GB
Free Memory: 4.6GB
Important Drivers / Plug-ins / Libraries:
Microsoft DirectX Version: 9.0
Apple QuickTime Version: Not installed
Adobe Reader Version: Not installed
Adobe Acrobat Version: 10.1
CD and DVD drives:
D: (ATAPI iHAS624 BUS: 1 ID: 1 Firmware: B)
Yes. Thanks for your reply. Just shows my age, doesn't it! I've really got to upgrade my system from Windows 3.1.
Still, it lasted me quite a few years through its various issues and wasn't a bad investment, considering I think it was a magazine cover freebie, though I can't remember whether it was issued on 5" or 3" disks!
Interesting to note that conceptually the original design was quite good, and together with the new face-recognition technology, which is as good as if not better than Picasa's, the PSE13 Organizer does a good job. It's a pity Adobe doesn't promote this side of the product more. I hadn't been able to find an obvious replacement for Album, which was starting to struggle as the photo count increased. Still, that's over now.
All I have to do now is raise a query about how to order/sort certain tags, and it's a near-perfect product for me. I can't find the answer in the Manual(s) or FAQs - or I'm not using the correct query keywords. -
Maximum size for Elements 6.0 catalog
Is there a maximum size for a single catalog to keep it operating efficiently? Everything I read says the maximum size is dependent on your hard drive space. I recently had to restore everything from a backup due to a fatal crash. I keep my photo files on an external, USB-powered hard drive. Right now I have over 41,000 photos, which is about 125 GB. These are family photos I would like to keep together for tagging, but am I pushing my Organizer beyond its limits?
My catalog is on my external hard drive. Here is the information from Help->System Info.:
Current Catalog:
Catalog Name:
Catalog Location: F:\\
Catalog Size: 133.7MB
Catalog Cache Size: 841.8MB
When I look on the external drive, the catalog files sit directly under the F: drive (my external hard drive). How do I move the Catalog.psedb, backupCT.dat, collstatus.dat, itemData.mk4, tagstatus.dat and thumb.5.cache? Can I copy/paste them to a new folder on my C: drive, or do I need to do something within PSE? The next step I haven't mentioned is that once I get PSE 6.0 up and running after its fatal crash, I want to convert to 8.0. Currently I'm getting a "cannot convert" error message in 8.0 saying I need to repair my catalog in 6.0, but when I try to repair, I'm told there are "no errors". If you have a link with instructions for converting to 8.0 too, it would be most helpful! Thanks for all your help!!