In which cases is the result cache status changed to BYPASS?
In which cases is the result cache status changed to BYPASS? I tried, but it wasn't changed.
Hi,
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_result_cache.htm
https://docs.google.com/viewer?a=v&q=cache:k4wT-iwL31MJ:www.sfoug.org/Downloads/Oracle11g_Results_Cache_20100304.pdf+oracle11g+result+cache+status+is+changed&hl=en&gl=in&pid=bl&srcid=ADGEEShUeeW6XPH7QiCtgUVzykmgGKA6l9NX5IZ322f-QIWU5D3qy-18KD7ffddW7t51PYL_xoJORhZLWg7K53SIrLfJd7R8jrcuPWoPgH3SwxOI6jFXeEYg7rhY1NTqriiakg_3qDgn&sig=AHIEtbQBpfc562bezPUIyojHZZ7kuxzI9Q
http://www.dba-oracle.com/oracle11g/oracle_11g_result_cache_sql_hint.htm
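Beyond the links, one way to see the BYPASS status is to turn bypass mode on manually (a sketch; requires EXECUTE on DBMS_RESULT_CACHE):

```sql
-- Switch the result cache into bypass mode: existing results are
-- ignored and new results are not added until bypass is turned off.
BEGIN
  DBMS_RESULT_CACHE.BYPASS(TRUE);
END;
/
SELECT DBMS_RESULT_CACHE.STATUS FROM DUAL;

-- Turn it back on:
BEGIN
  DBMS_RESULT_CACHE.BYPASS(FALSE);
END;
/
```

With bypass switched on, DBMS_RESULT_CACHE.STATUS should report BYPASS instead of ENABLED.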
Regards
Hitgon
Edited by: hitgon on Apr 28, 2012 6:43 PM
Similar Messages
-
Hi,
I am trying to enable the result cache feature on an Oracle 11g R2 server. I am following the steps below, but every time the cache status comes back DISABLED. Please help me correct it.
1. I am setting the RESULT_CACHE_MAX_SIZE and RESULT_CACHE_MODE as below. Memory_Target is 1232M, so I chose to allocate 200M for result cache.
ALTER SYSTEM SET RESULT_CACHE_MODE=FORCE
System altered.
ALTER SYSTEM SET RESULT_CACHE_MAX_SIZE=200M
System altered.
2. I am restarting the database
SHUTDOWN IMMEDIATE
STARTUP
3. I am querying the result cache status
SELECT DBMS_RESULT_CACHE.STATUS FROM DUAL
STATUS
DISABLED
I have configured other database servers for caching and they work fine, but I am not able to sort out what is going wrong in the scripts above. Please help me if I am missing a step.
Thanks
select * from v$version;
You need "Enterprise Edition": http://docs.oracle.com/cd/B28359_01/license.111/b28287/editions.htm
for EE check this:
http://psoug.org/reference/dbms_result_cache.html
http://www.oracle.com/technetwork/articles/datawarehouse/vallath-resultcache-rac-284280.html
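If the edition is right, it is also worth confirming that the cache actually has memory: RESULT_CACHE_MAX_SIZE = 0 disables the result cache permanently. A minimal check (a sketch):

```sql
-- Edition plus the two cache parameters; with result_cache_max_size
-- at 0 the server result cache stays DISABLED regardless of the mode.
SELECT banner FROM v$version;

SELECT name, value
  FROM v$parameter
 WHERE name IN ('result_cache_mode', 'result_cache_max_size');
```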
http://www.oracle-developer.net/display.php?id=503 -
In which case we require to write perform using/changing
hi,
in which case do we need to write PERFORM ... USING/CHANGING?
And what exactly does PERFORM ... USING/CHANGING do?
please somebody help me.
thanks
subhasis
This is an awfully basic question.
Simply press F1 on PERFORM.
And responders take note:
bapi
Rob -
I create and populate the following table in my schema:
create table plch_table (id number, time_sleep number);
begin
insert into plch_table values (1, 20);
commit;
end;
Then I create this function (it compiles successfully, since my schema has EXECUTE authority on DBMS_LOCK):
create or replace function plch_func
return number
result_cache
is
l_time_sleep number;
begin
select time_sleep
into l_time_sleep
from plch_table
where id = 1;
dbms_lock.sleep(l_time_sleep);
return l_time_sleep;
end;
I then start up a second session, connected to the same schema, and execute this block:
declare
res number := plch_func;
begin
null;
end;
Within five seconds of executing the above block, I go back to the first session and I run this block:
declare
t1 number;
t2 number;
begin
t1 := dbms_utility.get_time;
dbms_output.put_line(plch_func);
t2 := dbms_utility.get_time;
dbms_output.put_line('Execute in '||round((t2-t1)/100)||' seconds');
end;
what will be displayed after this block executes?
And the result is:
20
Execute in 30 seconds
However, I don't understand why. What is going on behind this? Why is the result 30 seconds? Could somebody tell me why?
Honestly, before yesterday's PL/SQL Challenge question, I had no idea how this worked either. This is very much a deep internals question; you'd likely have to go looking for a very specialized presentation or blog post to get more detail (or you'd have to do the research yourself). And even then, it's relatively unlikely that it would go into much more detail than the PL/SQL Challenge answer did. Julian Dyke's Result Cache Internals (PPT) is probably one of the more detailed presentations about the internals of the result cache.
The set of valid statuses for a result cache object is documented in the Oracle Database Reference entry for the v$result_cache_objects view. The two 10-second timeouts are controlled by the database- and session-level settings of the undocumented _result_cache_timeout parameter (which, based on this blog post by Vladimir Begun, was set to 60 seconds in 11.1.0.6 and changed to 10 seconds in 11.1.0.7).
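To watch those state transitions, the cache entry can be inspected from a third session while the function is still executing (a sketch; the LIKE filter assumes the function name appears in the NAME column):

```sql
-- While the creating session is still inside plch_func, the entry is
-- in status NEW and only that session may use it; once complete it
-- shows PUBLISHED, and other sessions can then hit the cache.
SELECT id, type, status, name
  FROM v$result_cache_objects
 WHERE name LIKE '%PLCH_FUNC%';
```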
Justin -
Negative result caching, aggregation threads
I have two questions:
1. Do any of the coherence caches do "negative" result caching? An example to explain what I mean:
I have a near cache in front of a partitioned cache which is backed by a database. I do a get which looks in the near cache, partitioned cache, and DB and doesn't find the value. If I then do another get for the same key will coherence go all the way to the DB again to look for it? Does containsKey work the same way?
2. Is it possible to increase the number of threads used for aggregation on a single Coherence node? I have a machine with lots of cores, and a parallel aggregator that uses a fair bit of CPU. I would like Coherence to run multiple instances of the aggregator in parallel without me having to start lots of processes.
Hi Cormac,
if I understand correctly, what you mean is: in case there are idle threads in the thread-pool, you want them to be utilized by multiple threads working on the same aggregation within the same storage node, dividing the partitions among them.
Splitting a single partition between multiple aggregators would contradict answers to questions regarding the behaviour of aggregators, possibly break the documented API, and in any case would render parallel aggregations unusable by weakening the guarantee that entries with a defined partition affinity are aggregated together.
The above things are not possible in the current version, and I am not sure if it is possible in upcoming versions, but some changes in the just released developer pre-release version make this less costly than it was up to 3.4.2.
One of the problems is that AbstractAggregator is stateful in the sense that it expects you to maintain a temporary result in an attribute of the aggregator, therefore
- either your code would have to be thread-safe (a requirement that is not documented, so introducing it would possibly break existing code out there); this would probably also mean increased context-switching cost due to synchronizing changes to those attributes across multiple threads,
- or Coherence would have to instantiate multiple instances of your aggregator within the same storage node, which comes with a somewhat increased memory footprint. Otherwise this would be doable.
On the other hand, you should remember that just because a thread is idle at a moment, it does not mean that there won't be many more requests coming in very soon afterwards which would be unnecessarily delayed by the parallel aggregator which consumes too many threads.
Best regards,
Robert -
Oracle 11g/R2 Query Result Cache - Incremental Update
Hi,
In Oracle 11g/R2, I created a replica of the HR.Employees table and executed the following statement (although using the SUM() function is not logical in this case, it serves to test the result):
STEP - 1
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
202 Pat Fay 6000
201 Michael Hartstein 13000
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 3837552314
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 130 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 2 | 130 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMPLOYEES_COPY | 2 | 130 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
690 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
STEP - 2
INSERT INTO HR.employees_copy
VALUES(200, 'Dummy', 'User','[email protected]',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
STEP - 3
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
202 Pat Fay 6000
201 Michael Hartstein 13000
200 Dummy User 5000
Elapsed: 00:00:00.03
Execution Plan
Plan hash value: 3837552314
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 195 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 3 | 195 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMPLOYEES_COPY | 3 | 195 | 3 (0)| 00:00:01 |
Statistics
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
714 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
In the execution plan of STEP-3, the RESULT CACHE operation is shown against ID 1, which suggests the result was retrieved directly from the result cache. Does this mean that the Oracle server incrementally refreshed the result set?
Before the execution of STEP-2 the cache contained only 2 records; then 1 record was inserted, and after STEP-3 a total of 3 records was returned from the cache. Does this mean that the newly inserted row was retrieved from the database and merged into the cached result of STEP-1?
If the Oracle server has incrementally retrieved and merged the newly inserted record, what mechanism does Oracle use to do so?
Regards,
Wasif
Edited by: 965300 on Oct 15, 2012 12:25 AM
965300 wrote:
Edited by: 965300 on Oct 15, 2012 12:25 AM
No, the RESULT CACHE operation doesn't necessarily mean that the results are retrieved from there; it could equally mean they are being written to there.
Look at the number of consistent gets: it is zero in the first step (I assume you had already run this query before), so I would conclude that the data is being read from the result cache.
In the third step there are 4 consistent gets, so I would conclude that the data is being written to the result cache. A fourth step repeating the SQL should show zero consistent gets, and that would be the results being read.
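One way to confirm that interpretation (a sketch): snapshot the cache statistics around a fourth execution.

```sql
-- 'Create Count Success' increases when a result is written to the
-- cache; 'Find Count' increases when a result is served from it.
SELECT name, value
  FROM v$result_cache_statistics
 WHERE name IN ('Create Count Success', 'Find Count');
```

If the fourth run only bumps 'Find Count' (and shows zero consistent gets), the result was read from the cache.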
Oracle result cache and functions
Hi All,
I am on 11.2 in Linux.
I want to use Oracle's result cache to cache results of (user defined) functions, which we use in SELECT commands.
My question is, does result caching work for deterministic and non-deterministic functions ?
Just curious, how Oracle keeps track of changes being made which affect a function's return value?
Thoughts please.
Thanks in advance
I want to ... cache results of (user defined) functions, which we use in SELECT commands.
You have four choices:
1. Subquery caching - (wrap function call in SELECT) useful for repeated function calls in a single SELECT
2. Marking function as DETERMINISTIC - inconsistent results across versions, deterministic best reserved for function-based indexes only
3. Result Cache
4. Move function logic out of function and inline to the main SQL statement.
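Option 1 can be sketched as follows (my_func and t are hypothetical names):

```sql
-- Scalar subquery caching: wrapping the call in (SELECT ... FROM dual)
-- lets Oracle cache the function result per distinct input value, so
-- repeated rows with the same t.col don't re-execute the function.
SELECT t.id,
       (SELECT my_func(t.col) FROM dual) AS f_val
  FROM t;
```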
The biggest downside of any function call that is inline to SQL is that it bypasses the read consistency mechanism, actually that's probably the second biggest downside. The biggest downside is normally that their misuse kills performance.
If your function itself contains SQL then you should seriously reconsider whether you should be using a function.
does result caching work for deterministic and non-deterministic functions?
The result cache knows nothing about determinism, so yes, it is applied regardless.
how does Oracle keep track of changes being made which affect a function's return value?
See v$result_cache_dependency.
The mechanism is very blunt, there is no fine-grained tracking of data changes that may affect your result.
It's as simple as function F1 relies on table T1. If the data in table T1 changes, invalidate the results in the result cache for F1. -
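That dependency mechanism can be observed directly (a sketch; the views are real, the scenario illustrative): after running a result-cached function that reads T1, the dependency view links the cached result to the table.

```sql
-- Each row links a cached result to a dictionary object it depends
-- on; committed DML on that object marks the result INVALID.
SELECT d.result_id,
       o.name      AS result_name,
       d.object_no
  FROM v$result_cache_dependency d
  JOIN v$result_cache_objects    o ON o.id = d.result_id;
```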
Hi,
I have read that there is SGA component called Result Cache. According to documentation:
The result cache buffers query results. If a query is run that
already has results in the result cache, the database returns
results from the result cache instead of rerunning the query. This
speeds up the execution of frequently run queries.
What happens when data in the tables referenced by the query changes? I think the old data will still be in the cache and I will always get the old data.
Peter D.
Hi Peter
What happens when data in tables which are called by
the query change. I think the old data still will be in
cache and I always get the old data.
The server result cache can return old data only for distributed queries (i.e. queries that also retrieve data from other databases).
All local queries return the correct data. To do so, the cache entries are invalidated as required.
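The invalidation is easy to demonstrate (a sketch; t1 is a hypothetical local table):

```sql
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM t1;  -- first run caches the result

DELETE FROM t1 WHERE ROWNUM = 1;              -- any committed DML on t1 ...
COMMIT;                                       -- ... invalidates the cache entry

SELECT status
  FROM v$result_cache_objects
 WHERE type = 'Result';                       -- the old entry now shows INVALID
```

A subsequent run of the query then rebuilds the cache entry rather than returning stale data.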
HTH
Chris Antognini
Author of Troubleshooting Oracle Performance, Apress 2008 (http://top.antognini.ch) -
Hi,
Apologies for double posting!
I had posted this query to the SQL and PL/SQL forum last week.
Result cache latch contention: http://forums.oracle.com/forums/thread.jspa?threadID=1553667&tstart=0
Posting it here again as I am trying to better understand the RC Latch behavior before using it in our production system.
Thanks!
Edited by: kedruwsky on Oct 10, 2010 9:32 PM
sb92075 wrote:
Latches exist to manage potential contention.
What else do you not understand?
Since latches exist, they will be used regardless of whether you understand them or not.
That was profound!
Did you check the results of the 3 test cases that were executed to check how the RC Latch was used?
Results of the test cases show how many times the latch was acquired (per invocation and throughout the iterations).
I want to understand the why behind the results,
i.e. the 2 latch gets per request and the acquire/release of the latch when there is a change in the request signature.
Also, result of test case 3 does not fit with the observations of test case 1 & 2.
Concurrent executions of the test cases have shown degraded performance.
Thus, I am not ready to implement this feature until I understand how it works and if there are any ways to reduce the contention. -
We are planning to implement the OSB result caching feature in our project. We did the following as a POC:
1. Created a DBAdapter to select from a table and created a BS out of that.
2. Enabled result caching with a TTL of 5 minutes.
3. Invoked the BS from a PS.
4. Tested the PS by invoking it from the test console.
5. The response was received as expected.
6. Changed the value in the table and tested again within 5 minutes.
7. New values were returned instead of the ones in the cache.
What might be the problem? Should it not return the old value from the table?
Each cached result is uniquely identified by a cache key that is made up of the ServiceRef (the unique identifier for the service, which is the fully qualified path name of the service), the operation being invoked, and a cache token string. The cache token helps to uniquely identify a single cached result among other cached results for one business service. You control the value of the cache token. You can set the cache token either by configuring the Cache Token expression in the Result Caching configuration for the business service, or by using the cache-token metadata element in $transportMetaData in the proxy service message flow. If there is no cache-token defined then caching may not work as expected.
Please refer -
35.7.5 Improving Performance by Caching Business Service Results at http://download.oracle.com/docs/cd/E17904_01/doc.1111/e15867/configuringandusingservices.htm#CHDDCGEE
http://blogs.oracle.com/MarkSmith/2011/03/osb_and_coherence_integration_1.html
Regards,
Anuj -
Activating objects but not showing up in Cache Status Overview
Hello SDN.
We are on PI 7.1. A strange thing has started happening: I can change an object, and when I activate it I get success, but on checking the Cache Status Overview it does not show up. We set up our SLD as follows:
SC_Systems (i.e. SAP, Outside Vendors etc) (all service interfaces and actions go here)
Which is the parent of SC_Mapping (all operation and message mappings)
Which is the parent of SC_Models (all DataTypes, ED, and Message Types go here)
So
SC_SYstems
SC_Mappings
SC_Models
When making a change to the Models SC and activating, we get the problem above; the same happens for SC_Systems. But when I do this for the mappings there is no issue.
One thing that is different is that a new developer accessed the DataType from SC_Systems and changed it, thus causing a conflict. (We usually access and change DT, MT, etc. in SC_Models.)
Since then it seems both are not allowing activation even though it says successful.
We have bounced the system and done all the cache refreshes I can think of, and are about to go back to a restore point from 3 days ago, but want to see if anyone has any idea what the problem could be.
Thanks in advance
Cheers
Devlin
Hi,
It may be too obvious, but have you checked the Change Activation List?
Perhaps the objects that were edited are "pending" there.
Clear the SLD cache later.
Regards,
Caio Cagnani -
Result cache refresh at business service (OSB) ?
Hi,
I have configured my business service at osb side to cache the result. My business service is pointing JCA- dbadapter which is calling a stored procedure.
Is there any way to update/refresh the cache if there are any changes in the corresponding tables which the stored procedure is using? I have googled a lot on this but have found nothing on it.
Any help would be very much appreciated.
AFAIK, OSB does not provide any functionality to refresh the cache without hitting the backend service.
The result caching in Oracle Service Bus is available for Business Services. When you enable result caching, the Business Service will not reach the backend service and the response will come from the cache entry. This entry can expire and if the entry is expired, the next service call will indeed reach the backend service and update the cache entry.
Each cached result has its own TTL. When a TTL is reached, Oracle Coherence flushes that individual cached result.
It would be better to raise a support ticket and ask for an enhancement.
Regards,
Abhinav Gupta -
Which kind of cache group is suitable for the intensive insertion operation
Hi Chris, sorry for calling you directly, but you have given me many good answers to my many newbie questions these days :)
You told me that the dynamic cache group is not suitable for the intensive insertion operation
because each INSERT to a child table has to perform an existence check against Oracle even if load the cache group into RAM manually(Please correct me if wrong).
Here I have many log tables that they only have a primary key and no foreign references and they are basically used to reflect changes from the related main tables.
Every insert/update/delete on the main table will insert a log record in the related logging table(No direct foreign references).
In order to cache these log tables, I have to create an independent cache group for each one, right?
I do not want to load the log data into RAM because my application does not use it, and the logs would clearly waste my RAM.
So here comes my question: which kind of cache group should I use to get the best performance without loading them into RAM?
As I understand it, a dynamic cache group loads data on demand, while a regular cache group needs to load all the data into RAM first, and it won't load data from Oracle after that?
Thanks in advance
SuoNayi
Let me be more specific. Consider this cache group:
CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_SWT
FROM
TPARENT (
  PPK NUMBER(8,0) NOT NULL PRIMARY KEY,
  PCOL1 VARCHAR2(100)
),
TCHILD (
  CPK NUMBER(6,0) NOT NULL PRIMARY KEY,
  CFK NUMBER(8,0) NOT NULL,
  CCOL1 VARCHAR2(20),
  FOREIGN KEY (CFK) REFERENCES TPARENT (PPK)
);
INSERTs into TPARENT will not do any existence check in Oracle. An INSERT INTO TCHILD has to verify that the corresponding parent row exists. If the parent row exists in TimesTen then no check is done in Oracle. If the parent row does not exist in TimesTen then we have to check whether it exists in Oracle, and if it does we will load it into TimesTen from Oracle (along with any other child rows) before completing the INSERT in TimesTen. So in the case where the parent already exists in TimesTen there is no overhead, but in the other case there is a lot of overhead.
If your log table is truly not related to the main table (not in TT and not in Oracle either) then it should go into a separate cache group. If each insert into the log table has a unique key and there is no possibility of duplicates then you do not need to load anything into RAM. You can start with an empty table and just insert into it (since each insert is unique). Of course, if you just keep inserting you will eventually fill up the memory in TimesTen. So you need a mechanism to purge no-longer-needed rows from TimesTen (they will still exist in Oracle, of course). There are really two options: investigate TimesTen automatic aging (see documentation), which may be adequate if the insert rate is not too high, or implement a custom purge mechanism using UNLOAD CACHE GROUP (see documentation).
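Those two purge options might look like this (a sketch; the cache group, table, and column names are hypothetical, so check the exact syntax in the TimesTen documentation):

```sql
-- Custom purge: evict rows older than 2 days from TimesTen only
-- (the rows remain in Oracle).
UNLOAD CACHE GROUP log_cg WHERE log_ts < SYSDATE - 2;

-- Or declarative time-based aging on the cached log table:
ALTER TABLE log_t ADD AGING USE log_ts LIFETIME 2 DAYS CYCLE 30 MINUTES;
```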
Chris -
Status profile changing on UD in stead of RR
In TC BS02 I've created a status profile with two stages, so the usage decision cannot be carried out before all characteristics are closed.
Stage 1: "INIT"
- Make usage decision (forbidden)
- Record results (permitted)
Stage 2: "QFIN"
- Close inspection complete (permitted)
- Complete insp. - short-term (permitted)
- Start inv. pstg bef. UsageDec (forbidden)
The status profile does work properly.
My problem is the following:
The status changes from INIT to QFIN on the usage decision.
I'd like a status change in the inspection lot when the result recording is finished. QFIN triggers an event for a workflow; the workflow must start before the usage decision.
Is it possible to adjust the status profile so that the status changes when RR is finished? Or is there a function module which can do that? I'm working in 4.7.
Thanks in advance.
Regards,
Rene
Edited by: Rene Fuhner on Apr 1, 2011 10:26 AM
Did you try selecting the 'Set' indicator in status 'QFIN' for when a business transaction occurs, such as 'Close inspection - complete' or 'Complete insp. - short-term'?
Regards
Luke -
Update -- I realize this was asked and answered back in 2011 here:
http://social.technet.microsoft.com/Forums/en-US/22c41af2-2829-4be0-9a09-3389f764123a/is-it-possible-to-set-cx600-presence-status-from-deskphone-dnd-busy-avail-etc?forum=ocsclients So now my basic question is, has anything changed?
The symptom: after 5 minutes, this standalone Polycom CX600 phone running in basic mode sets itself to Away. While presence is 'away', it can receive calls directly to its extension; however, it does not receive calls from the response group it is a member of. While it is set to 'available' presence, it receives calls from the response group successfully. A Lync resource account with an external phone number and full Enterprise Voice is used to sign into this phone. It's Lync 2013.
I've tried, within the phone menu in basic mode, to reset the presence to 'available', but it still reverts to 'away' after five minutes. I've done a hard reset, provisioned externally with a USB cable, and, from the computer Lync client, set my 'away' time to the maximum allowed 360 minutes. However, this did not carry over to the hard phone. I realize I might be able to jury-rig the calendar to always show as available, but I believe this would only hold if the phone kept operating in enhanced mode (which is not an option at this site).
So my question is: can a CX600 in basic mode be given a longer status/presence timeout to Away than 5 minutes? Again, I realize the answer from that 2011 thread above is 'no'; I thought I'd ask again in case anything's changed in 2 years.
I didn't notice an option on the Lync CX600 to control this setting.
The setting from the Lync desktop client isn't related to the setting on the phone.
That setting is controlled by the local registry key.
For Response Group, you can select Attendant routing method.
Attendant offers a new call to all agents who are signed into Lync Server and the Response Group application at the same time, regardless of their current presence.
Lisa Zheng
TechNet Community Support