Takes longer to run Explain on ANSI syntax?
Hi,
I rewrote a rather involved query (35 tables) from Oracle join syntax to ANSI.
To run explain plan took about 4-5 secs with the old query,
while the new one takes 1.5 minutes! The queries produce identical plans
and result sets.
Is this a shortcoming in Oracle, or something you would expect?
Is this maybe a known issue?
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
Thank you!
A
Slow_moe wrote:
There is most certainly a difference here. I use Toad 10.5.1.3 in both cases, just pressing the button.
We do have some issues with the database, performance wise, though. I'm starting to think it is
not completely sane (upgraded from 9i, not installed from scratch).
Well, I realize this is impossible to "solve" in a forum. I just wanted to run this by you
folks prior to sending it to our (external) DBA; it could have been a known bug when the table count gets high, or something.
Thanks!
A
That could be an issue in Toad.
We've just installed Toad 10.5.1.3 and they certainly seem to have managed to introduce a multitude of bugs in it. ?:|
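One way to rule Toad out is to time the EXPLAIN PLAN of both versions directly in SQL*Plus. A minimal sketch (the statement ID and the two-table query are placeholders; substitute the real 35-table query):

```sql
-- Time the parse/explain outside Toad; repeat with the old-syntax query and compare.
SET TIMING ON

EXPLAIN PLAN SET STATEMENT_ID = 'ansi_test' FOR
SELECT e.empno, d.dname
  FROM emp e
  JOIN dept d ON d.deptno = e.deptno;   -- stand-in for the real ANSI query

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, 'ansi_test'));
```

If both versions explain in a few seconds here, the 1.5 minutes is being spent in the tool, not the optimizer.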
Similar Messages
-
DELETE Statement takes long time running
Hi,
DELETE statements take a very long time to complete.
Can you advise me how I can diagnose the slow performance?
Thanks
Deleting rows can be an expensive operation.
Oracle stores the entire row as the 'before image' of deleted information (table and index data) in
rollback segments, generates redo (keeps archiver busy), updates free lists for blocks that are
falling below PCTUSED setting etc..
Select count(*) runs longer because Oracle scans all blocks (up to the High Water Mark) whether
there are any rows in the blocks or not.
These operations will take more and more time if the tables are loaded with the APPEND hint or
SQL*Loader using direct mode.
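The High Water Mark point above can be sketched like this (table name T is hypothetical):

```sql
-- DELETE generates undo and redo for every row and leaves the High Water Mark
-- (HWM) where it was, so later full scans still read all formerly used blocks.
DELETE FROM t;
SELECT COUNT(*) FROM t;   -- 0 rows, but still scans every block up to the HWM

-- TRUNCATE is DDL: it resets the HWM with minimal undo/redo,
-- but it cannot be rolled back.
TRUNCATE TABLE t;
SELECT COUNT(*) FROM t;   -- now reads almost nothing
```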
A long time ago, I ran into a similar situation. Data was "deleted" selectively, and SQL*Loader was
used (in DIRECT mode) to add new rows to the table. I had a few tables using more blocks
than they had rows!
Needless to say, the process was changed to truncate tables after exporting/extracting required
data first, and then loading the data back. Worked much better. -
Normal accruals and accruals reruns takes longer to run
Hi
I am in banking and I have a problem with the accrual runs. They take a long time to run and complete.
Can somebody assist on where I can look to fix this problem?
Hi, refer to Note 1387986.
-
Query takes longer to run with indexes.
Here is my situation. I had a query which I used to run in Production (Oracle 9.2.0.5) and the Reporting database (9.2.0.3). The time taken to run in both databases was almost the same until 2 months ago, at about 2 minutes. Now in Production the query does not run at all, whereas in Reporting it continues to run in about 2 minutes.
Some of the things I observed in Production are: 1) the optimizer_index_cost_adj parameter was changed from 100 to 20 about 3 months ago, in order to improve the performance of a paycalc program. Even with this parameter set to 20, the query used to run in 2 minutes until 2 months ago. In the last two months the GL table grew from 25 million rows to 27 million rows. With optimizer_index_cost_adj at 20 and a GL table of 25 million rows it runs fine, but with 27 million rows it does not run at all. If I change the value of optimizer_index_cost_adj to 100, then the query runs with 27 million rows in 2 minutes, and I found that it uses a full table scan. The Reporting database always used a full table scan, as found through explain plan. The CBO determines which scan is best and uses that.
So my question is: by making optimizer_index_cost_adj = 20, does Oracle force an index scan when the table size is 27 million rows? Isn't the index scan faster than a full table scan? In what situation is a full table scan faster than an index scan? If I drop all the indexes on the GL table, then the query runs faster in Production, as it uses a full table scan. What is the real benefit of changing optimizer_index_cost_adj values? Any input is most welcome.
"Isn't the index scan faster than a full table scan? In what situation is a full table scan faster than an index scan?"
No. It is not about which one is the "+fastest+", as that concept is flawed. How can an index be "faster" than a table, for example? Does it have better tires and a shinier paint job? ;-)
It is about the amount of I/O that the database needs to perform in order to use that object's contents for resolving/executing that applicable SQL statement.
If the CBO determines that it needs 100 widgets worth of I/O to scan the index, and then another 100 widgets of I/O to scan the table, it may decide not to use the index at all, as a full table scan will cost only 180 I/O widgets - 20 less than the combined 200 for scanning both index and table.
Also, a full scan can make use of multi-block reads - and this, on most storage/file systems, is faster than single block reads.
So no - a full table scan is NOT a Bad Thing (tm) and not an indicator of a problem. The thing that is of concern is the amount of I/O. The more I/O, the slower the operation. So obviously, we want to make sure that we design SQL that requires the minimal amount of I/O, design a database that support minimal I/O to find the required data (using clusters/partitions/IOTs/indexes/etc), and then check that the CBO also follows suit (which can be the complex bit).
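You can see the CBO's cost for each access path side by side with hinted explain plans. A sketch (the GL table, column, and index names here are illustrative, not from the original post):

```sql
-- Cost of the full-scan path
EXPLAIN PLAN FOR
  SELECT /*+ FULL(g) */ SUM(amount) FROM gl g WHERE period = '2009-01';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- note the Cost column on TABLE ACCESS FULL

-- Cost of the index path
EXPLAIN PLAN FOR
  SELECT /*+ INDEX(g gl_period_ix) */ SUM(amount) FROM gl g WHERE period = '2009-01';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- compare INDEX RANGE SCAN + table access cost
```

Lowering optimizer_index_cost_adj simply scales the index-path cost down, which is why at 20 the CBO kept choosing the index long after the full scan had become cheaper in real I/O terms.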
But before questioning the CBO, first question your code and design - and whether or not they provide the optimal (smallest) I/O footprint for the job at hand. -
Report takes long time to refresh
Hi Experts ,
I have an issue. I have only used one worksheet to upload, as the original workbook has about 25 worksheet tabs and is too large to upload. The report takes 20 minutes to refresh. While rebuilding the report I noticed that when I added the formulas at the bottom of the original report, the report started to take longer to run. When the bottom section of the report is not included, the report refreshes in about 2 minutes.
In the file that completes faster, there are no formulas after line 353.
In the file that takes a long time to refresh, they start on line 357. Any suggestions? Thank you.
Regards
R@vi
Hi,
If you discover significantly high frontend time, check whether the formatting is the reason. If so, either switch it off or reduce the result lines.
As formatting information is then not transferred, the time consumed in the frontend can be reduced.
But the workbook you are executing may simply take a long time to run.
Message was edited by: AVR - Intelli -
Update statement takes too long to run
Hello,
I am running this simple update statement, but it takes too long to run. It was running for 16 hours and then I cancelled it; it had not even finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K records. If I add ROWNUM < 20 to the update statement, it works just fine and updates the right column with the right information. Do you have any ideas what could be wrong in my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different db from the destination table. We are running an 11g Oracle DB.
UPDATE DEV_OCS.DOCMETA IPM
SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
FROM [email protected] LKP
WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND
IPM.XIPMSYS_APP_ID = 2)
WHERE
IPM.XIPMSYS_APP_ID = 2;
Thanks,
Ilya
matthew_morris wrote:
In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated". For example:
{code}
SQL> set linesize 132
SQL> explain plan for
2 update emp e
3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL>
{code}
As you can see, the WITH clause by itself guarantees nothing. We must force the optimizer to materialize it:
{code}
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3568118945
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
| 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
|* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
6 - filter("T"."DEPTNO"=:B1)
Remote SQL Information (identified by operation id):
PLAN_TABLE_OUTPUT
5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
25 rows selected.
SQL>
{code}
I do know the materialize hint is not documented, but I don't know any other way, besides splitting the statement in two, to materialize it.
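Splitting the statement in two could look like the following sketch (the local staging table dept_local is hypothetical; it reuses the emp/dept@sol10 example from the plans above):

```sql
-- Pull the remote rows across the DB link once, into a local holding table.
CREATE GLOBAL TEMPORARY TABLE dept_local ON COMMIT PRESERVE ROWS
  AS SELECT * FROM dept@sol10 WHERE 1 = 0;   -- structure only

INSERT INTO dept_local
  SELECT * FROM dept@sol10;                  -- one remote round trip

-- The correlated subquery now runs against local data only.
UPDATE emp e
   SET e.deptno = (SELECT t.deptno
                     FROM dept_local t
                    WHERE t.deptno = e.deptno);
```

This achieves the same effect as the materialize hint, but with documented behavior.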
SY. -
The 0co_om_opa_6 ip in the process chains takes long time to run
Hi experts,
The 0CO_OM_OPA_6 IP in the process chains takes a long time to run - around 5 hours - in production.
I have checked the note 382329,
-> where the indexes 1 and 4 are active
-> index 4 showed "Index does not exist in database system ORACLE" - I have assigned it to "Indexes on all database systems" and ran the delta load in the development system, but I guess there is not much data in dev; it took 2-1/2 hrs to run, as it was taking earlier, so I didn't find much difference in performance.
As per Note 549552 - CO line item extractors: performance, I have checked the table BWOM_SETTINGS; these are the settings in the ECC system.
-> OLTPSOURCE - is blank
PARAM_NAME - OBJSELSIZE
PARAM_VALUE- is blank
-> OLTPSOURCE - is blank
PARAM_NAME - NOTSSELECT
PARAM_VALUE- is blank
-> OLTPSOURCE- 0CO_OM_OPA_6
PARAM_NAME - NOBLOCKING
PARAM_VALUE- is blank.
Could you please check if any other settings needs to be done .
Also, for the IP there is a selection criterion for FISCALYEAR/PERIOD from 2004-2099, and an init was done for the same period; as a result it is becoming difficult for me to load for a single year.
Please suggest.
The problem was that index 4 was not active at the database level. It was recommended by the SAP team to activate it in SE14; however, while doing so we faced a few issues. SE14 is a very sensitive transaction and should be handled carefully... the index should be activated, not created.
The OBJSELSIZE in the table BWOM_SETTINGS has to be marked 'X' to improve the quality, and index 4 should also be activated at the ABAP level, i.e. in the table COEP -> INDEXES -> INDEX 4 -> select "Index on all database systems" in place of "No database index". Once it is activated at the ABAP level, you can activate the same index at the database level.
Be very careful while you execute it in SE14; it is best to use DB02 to do the same, as basis tends to make fewer mistakes there.
Thanks. Hope this helps. -
Running a page takes long time
Hi,
Running a page takes a long time in JDeveloper.
I see that, every time I run a page, the tool archives/loads the following
Tutalii: C:\jdevbin\jdev\appslibrt\iasjoc.zip
Tutalii: C:\jdevbin\jdev\appslibrt\fwk.zip
It seems that this process itself takes a lot of time - nearly 5 minutes.
This adds to the page rendering time of nearly 10 minutes.
We are doing remote development as we don't have an option. Can we prevent the loading of above zip files again and again ?
Any other way to make the pages run faster ?
Thanks,
Amit
I believe that link doesn't help much.
The Tutalii step seems to only happen on the initial startup of a page, the first time. If you keep the J2EE instance running, then the next page load isn't so bad. There is no need to restart the J2EE, as redeployment clears the classloader cache, etc.
I've also found that if I have a lot of projects built in "myclasses", this "Tutalii" thing takes longer. So when I finish with one project, I clear my directory under "myclasses".
This might help. -
It takes a long time to load the parameter's values when we run the report
Hi,
It takes a long time to load the parameter's values when we run the report. What could cause this? How should I troubleshoot the behavior of the report? Could I use Profiler, and what events should I select?
Thanks
Hi jori5,
Based on my understanding, after changing the parameter, the report renders very slowly, right?
In Reporting Services, the total time to generate a report includes TimeDataRetrieval, TimeProcessing and TimeRendering. To analyze which section takes the most time, we can check the ExecutionLog3 view in the ReportServer database. For more information, please
refer to this article:
More tips to improve performance of SSRS reports.
In your scenario, since you mention the query takes less time, the delay might happen during the report processing and report rendering sections. So you should check ExecutionLog3 to see which section costs the most time, then you can refer to this article
to optimize your report:
Troubleshooting Reports: Report Performance.
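As a starting point, a sketch of a query against ExecutionLog3 (run in the ReportServer catalog database; the TOP 20 cutoff is arbitrary):

```sql
-- Break down where the time goes for recent executions; times are in milliseconds.
SELECT TOP 20
       ItemPath,
       TimeStart,
       TimeDataRetrieval,   -- query time against the data source
       TimeProcessing,      -- grouping, aggregation, expression evaluation
       TimeRendering        -- producing the output format
  FROM ExecutionLog3
 ORDER BY TimeStart DESC;
```

Whichever of the three columns dominates tells you whether to tune the dataset query, the report layout, or the rendering.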
If you have any question, please feel free to ask.
Best regards,
Qiuyun Yu -
NAC Agent takes long time to run
Cisco NAC agent takes long time to popup or run on Windows 7 machine.
The client machine is windows 7, running nac agent 4.9.0.42, against ISE 1.1.1
Any ideas how to reduce the NAC Agent timing?
Hi Tariq,
I'm facing the same issue with ISE 1.1.1 (268) with Agent 4.9.0.47 for Windows XP clients. I have already configured "yes" to disable the L3 SWISS delay and reduced the HTTP discovery timer from 30 to 05 sec, but clients still take approximately 2 minutes 30 seconds for the popup and to finish the posture discovery.
Can you please advise whether this is the minimum time, or what the minimum time is and what parameters to set to minimize the time to complete agent popup and posture discovery?
Is there any option so that we can run this in the background?
thanks in advance.. -
Oracle9i reports take longer time while running in web
Hi,
I have developed a few reports in Oracle9i and I am trying to run the reports on the web. Running a report through Reports Builder takes less time compared to running the same report on the web using web.show_document. This also depends on the file size. If my report file size (.jsp file) is less than 100KB, then it takes 1 minute to show the parameter form and another 1 minute to show the report output. If my file size is around 190KB, then the system takes at least 15 minutes to show the parameter form, and another 10 to 15 minutes to show the report output. I don't understand why the system takes such a long time to show the parameter form.
I have a similar problem while opening the file in Reports Builder. If my file size is more than 150KB, then it takes more than 15 minutes to open the file.
Could anyone please help me on this.
Thanks, Radha
This problem exists only with .jsp reports. I saved the reports in .rdf format and they run faster on the web now. Opening a .jsp report takes a long time (a 600KB file takes at least 2 hours), but the same report in .rdf format takes a few seconds to open in Reports Builder.
-
Auto message restart job take long time to run
Dear all,
I have configured the auto message restart job in SDOE_BG_JOB_MONITOR,
but it takes a long time to run.
I have executed the report and found out that it is fetching records from SMMW_MSG_HDR.
The table currently holds 67 lakh (6.7 million) records.
It is actually taking a lot of time getting data from the table.
Is there any report or tcode for clearing data in the table?
I need your valuable help to resolve the issue.
Regards
lakshman balanagu
Hi,
If you are using an Oracle database, you may need to run the table statistics report (RSOANARA) to update the index statistics. The system admin should be able to do this.
Regards,
Vikas
Edited by: Vikas Lamba on Aug 3, 2010 1:20 PM -
I recently switched from a PowerMac to a Mac Pro 5,1 (2 x 2.66GHz 6-core Xeon) running FCP 7 on OS 10.6.8. Exporting QT files and rendering project files now takes longer than on my old machine. Looking at Activity Monitor, % user is around 6 and % idle around 42. Are all the cores being used with FCP7?
johnnyapplesod wrote:
exporting qt files and rendering project files now takes longer than my old machine
It's because the PowerMac G5 had HUGE bandwidth, with a fat bus on each processor, so they could run hard, hot and heavy.
It's not so with the Intel processors, which are multi-core and share a common bus, thus a bottleneck.
The newer Intel processors can do more work on the CPU, but when it comes to in/out they are s-l-o-w.
I had a wickedly fast dual-processor (not dual-core) G5, a RAID 0 pair of 10,000 RPM drives as boot and a fast video card; I could do a lot very very fast.
You'll likely have to upgrade to Final Cut X to get more cores utilized. Prepare to cry a little bit; Apple is working on the features they stripped out of it to make amends to pro users who complained loudly (all over TV too).
http://www.loopinsight.com/2011/09/20/apple-releases-major-update-to-final-cut-pro-x-release-demo-version/
BPM Process chain takes long time to process
We have BI7, Netweaver 2004s on Oracle and SUN Solaris
There is a process chain (BPM) which pulls data from the CRM system into BW. The scheduled time to run this chain is 0034 hrs, and it should ideally complete before / around 0830 hrs. <b>Now the problem is that every alternate day this chain behaves normally and gets completed well before 0830 hrs, but every alternate day this chain fails.</b>
There are almost 40 chains running daily. Some are event triggered (dependent on each other) and some run in parallel. In this (BPM) process chain there are usually 5 requests, with 3 delta and 2 full uploads (master data). The delta uploads finish in 30 minutes without any issues, with very few record transfers. The first full upload runs from 0034 hrs to approximately 0130 hrs and the 2nd from 0130 hrs to 0230 hrs. Now, if the 1st upload gets delayed, then the people who are initiating these chains stop the 2nd full upload and continue it after all the process chains are completed. This entire BPM process chain sometimes takes 17-18 hrs to complete!!!!!
No other loads in CRM or BW when these process chains are running
CRM has background jobs to push IDOCS to BW which run every 2 minutes which runs successfully
Yesterday this chain completed successfully (well within the stipulated time) with over 33,00,000 (3.3 million) records transferred, but sometimes it has failed to transfer even 12,00,000 (1.2 million) records!!
Attaching a zip file, please refer the 21 to 26 Analysis screen shot.doc from the zip file
Within the zip file, attaching Normal timings of daily process chains.xls the name explains it .
Also within the zip file, refer to BPM Infoprovider and data source screen shot.doc; please refer to this file, as the infopackage (page 2) which was used in the process chain is not displayed later on page 6, BUT THE CHAIN COMPLETED SUCCESSFULLY.
We have analyzed:--
1) The PSA data for BPM process chain for past few days
2) The info providers for BPM process chain for past few days
3) The ODS entries for BPM process chain for past few days
4) The point of failure of BPM process chain for past few days
5) The overall performance of all the process chains for past few days
6) The number of requests in BW for this process chain
7) The load on CRM system for past few days when this process chain ran on BW system
As per our analysis, there are couple of things which can be fixed in the BW system:--
1) The partner agreement (transaction WE20) defined for the partner LS/BP3CLNT475 mentions, for both message types RSSEND and RSINFO: collect IDOCs and pack size = 1. Since pack size = 1 will generate 1 tRFC call per IDOC, it should be changed to 10 so that fewer tRFCs will be generated, thus less overhead for the BW server, resulting in an increase in performance.
2) In the definition of destination for the concerned RFC in BW (SM59), the Technical Setting tab says the Load balancing option = No. We are planning to make it Yes
But we believe that though these changes will bring some increase in performance, this is not the root cause of the abnormal behavior of this chain, as the chain runs successfully every alternate day with approximately the same load.
I was not able to attach the many screenshots or the info which I had gathered during my analysis. Please advise how I can attach these files.
Best Regards,
Hi,
Normally index creation or deletion can take a long time if your database statistics are not updated properly. So, after your data load has completed and index generation is done, gather fresh database statistics.
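Gathering statistics for the load target could be sketched as follows (the owner and table names are illustrative; use the actual InfoProvider tables):

```sql
-- Refresh optimizer statistics on the loaded table and, via cascade, its indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SAPR3',           -- hypothetical schema owner
    tabname => 'EXAMPLE_FACT',    -- hypothetical table name
    cascade => TRUE);             -- also gather index statistics
END;
/
```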
Then try to recheck ...
Regards,
Satya -
Why it takes long time to execute on Production than staging?
Hi Experts,
Any help appreciated on the below issue.
I have one anonymous block updating around 1 million records by joining 9 tables.
This is promoted to production through the following environments, and all environments have exactly the same volume of data:
development -> Testing -> Staging -> Production.
The funny problem is that while it takes 5 mins to execute in the other environments, it takes 30 mins on production.
Why does this happen, and what can be the action points for the future?
Thanks
-J
==============
If the performance is that different in the different environments, one or more statements must have different query plans in the different environments. The first step would be to get the query plans and compare them to figure out which statement(s) is/are running slowly.
If there are different query plans, that implies that something is different between the environments. That could be any of
- Oracle version
- initialization parameters
- data
- object statistics
- system statistics
If you guarantee that the data is the same, I would tend to expect that the object statistics are different. How have you gathered statistics in the various environments? Can you move statistics from an environment where performance is acceptable to the environment where performance is unacceptable?
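Moving statistics between environments can be done with DBMS_STATS export/import. A sketch (the SCOTT schema and STATS_XFER staging table are example names):

```sql
-- On the environment where performance is acceptable:
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE('SCOTT', 'STATS_XFER');   -- staging table for stats
  DBMS_STATS.EXPORT_SCHEMA_STATS('SCOTT', 'STATS_XFER'); -- snapshot current stats
END;
/

-- Copy STATS_XFER to the target database (exp/imp or Data Pump), then:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS('SCOTT', 'STATS_XFER'); -- apply the known-good stats
END;
/
```

If the slow environment then produces the same plans, statistics were the difference.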
I would also recommend following the advice others have given you. You don't want to commit in a loop and you want to do as much processing in SQL as possible.
Justin
===============
Thanks Steve for your inputs.
My investigation resulted following 2 points.
There are 2 main reasons why some scripts might take longer in live than on Staging:
1: Weekend backups were running on the live server, slowing the server down a lot.
2: The tables are re-orged when they are imported into staging/Dev – so the table and index layout is optimal; on live, the tables and indexes are not necessarily contiguous, so in order to do the same work the server will need to do many more I/O operations.
Can we have some action points to address these issues?
I think if the data can be contiguous then it may help.
Best Regards
-J
===============
But before that, can you raise this in a separate thread, as there is a different issue going on in this thread?
Cheers
Sarma.
===========
Re: Performance issue (Oracle 10.2.0.3.0)
Posted: May 22, 2009 2:46 AM in response to: Radhakrishna Sa...
Hey Sarma,
Extreme apologies, but I don't know how to raise a new thread.
Thanks in advance for your help.
-J
user636482
Re: Performance issue (Oracle 10.2.0.3.0)
Posted: May 22, 2009 2:51 AM in response to: user527345
Hi User 527345,
Please follow these steps to raise a request in this forum.
1. Register yourself.
2. Go to the forum home and select the technology for which you want to raise a request.
e.g.: if it is related to Oracle Database general, then select Oracle Database general...
3. Click on "post new thread".
4. Give a summary of your issue.
5. Then submit the issue.
please let me know if you need more information.
Thank you
Jayashree Mohanty wrote:
My investigation resulted following 2 points.
There are 2 main reasons why some scripts might take longer in live than on Staging.
1: Weekend backups were running on the live server, slowing the server down a lot.
2: The tables are re-orged when they are imported into staging/Dev – so the table and index layout is optimal; on live, the tables and indexes are not necessarily contiguous, so in order to do the same work the server will need to do many more I/O operations.
Can we have some action points to address these above issues?
I think if the data can be contiguous then it may help.
First, I didn't understand at all what that was when you copied part of some unknown post into your actual question. Please read this; it will help you post a proper question and get a proper answer:
http://www.catb.org/~esr/faqs/smart-questions.html
Now, how did you come to the conclusion that the backups are actually making your query slower? What's the benchmark that led you to this? And what's the meaning of the 2nd point - can you please explain it?
As others have also mentioned, please post the plan of the query on both staging and production; only that can tell what's going on.
HTH
Aman....