Performance issue while coding
Hi Experts,
If we use ranges in a particular report, could you please explain in detail whether performance decreases or increases?
Thanks & Regards,
Maha.
Hi,
Select-options create an internal table in the background with the following structure:
Sign = I/E
Option = EQ/BT/GT/LE ...
Low = the low value you enter in the first input box
High = the high value you enter in the second input box
The advantage of select-options is the flexibility you get on the selection screen: the data can be filled in directly, you can use additions such as NO INTERVALS, and you can make multiple selections.
With ranges you do not have all of these conveniences, since a range table is typically filled at runtime, for example with data from a select query.
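To make the structure concrete, here is a minimal sketch (the table MARA and the values are only illustrative examples, not from the original post): a RANGES table has exactly the SIGN/OPTION/LOW/HIGH structure described above, is filled in code at runtime, and can be used in a WHERE clause just like a select-option.

```abap
* Hypothetical example: a range of material numbers filled at runtime.
DATA lt_matnr LIKE TABLE OF mara-matnr.

RANGES r_matnr FOR mara-matnr.

r_matnr-sign   = 'I'.        " I = include
r_matnr-option = 'BT'.       " BT = between
r_matnr-low    = 'MAT001'.   " value of the first input box
r_matnr-high   = 'MAT099'.   " value of the second input box
APPEND r_matnr.

* The range table behaves like a select-option in the WHERE clause.
SELECT matnr FROM mara INTO TABLE lt_matnr
  WHERE matnr IN r_matnr.
```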
Regards
Naveen Gupta
Similar Messages
-
Performance issue while opening the report
HI,
I am working on BO XI R3.1. There is a performance issue when opening reports on the BO Solaris server, while on the Windows server it is comparatively fast.
We have a few reports which contain 5 fixed prompts and 7 optional prompts.
Out of the 5 fixed prompts, 3 are static (they contain only 3-4 records each) and come from a materialized view.
We have already used many things to improve report performance, such as:
1) Index awareness
2) Aggregate awareness
3) Array fetch size: 250
4) Array bind time: 32767
5) Login timeout: 600
The issue is that, before any refresh, opening the report itself takes 1.30 min on the BO Solaris server, while the same report takes 45 sec on the BO Windows server. Even when we import it to another BO Solaris server it takes the same time as on the old Solaris server (1.30 min).
When we close the trace on the Solaris server it takes 1.15 sec. In this initial phase it is not hitting the database much, so why is it taking that much time just to open the report?
Could you please guide us as to where exactly the problem is and how we can improve the report-opening performance? If the problem is related to the Solaris server, what would it be and how can we rectify it?
In case any further input is required, feel free to ask me.
Hi Kumar,
If this is happening with all the reports then this issue seems to be due to firewall or security settings of Solaris OS.
Please try to lower down the security level in solaris and test for the issue.
Regards,
Chaitanya Deshpande -
Performance issue while generating Query
Hi BI Gurus.
I am facing a performance issue while generating a query on 0IC_C03.
It has a (from & to) variable for generating the report for a particular time interval.
If the (from & to) variable fields are filled, then after taking a long time the query ends in a runtime error.
If the query is executed without the variable (which is optional), the data is extracted from the beginning up to the current date, and this takes less time to execute.
The period then has to be selected manually via the "keep filter value" option. Please suggest how I can solve this error.
Regards
Ritika
Hi Ritika,
Welcome to SDN.
You have to check the following runtime segments using transaction ST03N:
High Database Runtime
High OLAP Runtime
High Frontend Runtime
If it is high database runtime:
- check the aggregates, or create aggregates on the cube; this will help you.
If it is high OLAP runtime:
- check the user exits, if any.
- check whether hierarchies are used and are being read at a deep level.
If it is high frontend runtime:
- check whether a very high number of cells and formattings are transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
For the from and to date variables, create one more set, use it, and try again.
Regs,
VACHAN -
I'm facing performance issue while accessing the PLAF Table
Dear all,
I'm facing a performance issue while accessing the PLAF table.
The START-OF-SELECTION of the report starts with the following select query.
SELECT plnum pwwrk matnr gsmng psttr FROM plaf
INTO CORRESPONDING FIELDS OF TABLE it_tab
WHERE matnr IN s_matnr
AND pwwrk = p_pwwrk
AND psttr IN s_psttr
AND auffx = 'X'
AND paart = 'LA' .
While executing the report in the quality system there is no performance issue at all.
In the production system, however, the above select query alone takes 15-20 minutes before the report moves on.
Kindly help me to overcome this problem.
Regards,
Jessi
Hi,
"Just implement its primary key: WHERE PLNUM BETWEEN '0000000001' AND '9999999999'" - by this you are implementing the primary key.
That statement has nothing to do with performance, because the system either cannot use the primary key or has to read every row anyway.
Jessica, your query uses a secondary index created by SAP:
index 1 (material, plant), which uses the fields MANDT, MATNR and PLWRK,
but it is not suitable in your case.
You can consider adding a new index containing all the fields MANDT, MATNR, PWWRK, PSTTR, AUFFX, PAART,
or - depending on the number of rows meeting and not meeting the (auffx = 'X' AND paart = 'LA') condition -
it could speed up performance to create a secondary index on the fields MANDT, MATNR, PWWRK, PSTTR
and do as Ramchander suggested: remove AUFFX and PAART from the index and from the WHERE clause, and remove the unwanted rows
after the query using a DELETE statement.
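A sketch of what that reworked access could look like (assuming the proposed secondary index on MANDT, MATNR, PWWRK, PSTTR exists, and that it_tab also contains the AUFFX and PAART fields so the rows can be filtered afterwards):

```abap
* Restrict the WHERE clause to the indexed fields only ...
SELECT plnum pwwrk matnr gsmng psttr auffx paart FROM plaf
  INTO CORRESPONDING FIELDS OF TABLE it_tab
  WHERE matnr IN s_matnr
    AND pwwrk = p_pwwrk
    AND psttr IN s_psttr.

* ... and remove the rows that do not match the remaining conditions.
DELETE it_tab WHERE auffx <> 'X' OR paart <> 'LA'.
```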
Regards,
Przemysław
Please check how many rows are in the production system.
-
Performance issue while selecting material documents MKPF & MSEG
Hello,
I'm facing performance issues in production while selecting material documents for a sales order and item based on the sales order stock.
Here is the query:
I first select data from the EBEW table (the sales order stock table), then run this query.
IF ibew[] IS NOT INITIAL AND ignore_material_documents IS INITIAL.
* Select the material documents created for the sales orders.
SELECT mkpf~mblnr mkpf~budat
mseg~matnr mseg~mat_kdauf mseg~mat_kdpos mseg~shkzg
mseg~dmbtr mseg~menge
INTO CORRESPONDING FIELDS OF TABLE i_mseg
FROM mkpf INNER JOIN mseg
ON mkpf~mandt = mseg~mandt
AND mkpf~mblnr = mseg~mblnr
AND mkpf~mjahr = mseg~mjahr
FOR ALL entries IN ibew
WHERE mseg~matnr = ibew-matnr
AND mseg~werks = ibew-bwkey
AND mseg~mat_kdauf = ibew-vbeln
AND mseg~mat_kdpos = ibew-posnr.
SORT i_mseg BY mat_kdauf ASCENDING
mat_kdpos ASCENDING
budat DESCENDING.
ENDIF.
I need to select the material documents because the end users want to see the stock as of a certain date for the sales orders, and only the material document lines can give this information. The EBEW table gives stock only for the current date.
For example:
If the report is run on 5th Oct 2008 for stock date 30th Sept 2008, then I need to consider the goods movements after 30th Sept and add the quantity if stock was issued or subtract it if stock was added.
I know there is an index MSEG~M on MSEG in the database. However, I don't know which storage locations (LGORT) and movement types (BWART) should be considered, so I tried using all the storage locations and movement types available in the system, but this caused the query to run even slower than before.
I could create an index for the fields mentioned in the WHERE clause, but that would be an overhead anyway.
Your help will be appreciated. Thanks in advance.
regards,
Advait
Hi Thomas,
Thanks for your reply. The performance of the query has improved significantly after switching the join to MSEG join MKPF.
Actually, I also tried it without the join, looping with field symbols instead, and this works slightly faster than the switched join.
Here are the results, tried with 371 records, as our sandbox unfortunately doesn't have too many entries:
Results before switching the join: 146,036 microseconds
Results after switching the join: 38,029 microseconds
Results without the join: 28,068 microseconds for the selection and 5,725 microseconds for the looping
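For reference, the switched join discussed above could be sketched like this (driving from MSEG and joining MKPF; the field list is taken from the original query, but this is only a sketch, not the exact code that was measured):

```abap
SELECT mkpf~mblnr mkpf~budat
       mseg~matnr mseg~mat_kdauf mseg~mat_kdpos mseg~shkzg
       mseg~dmbtr mseg~menge
  INTO CORRESPONDING FIELDS OF TABLE i_mseg
  FROM mseg INNER JOIN mkpf
    ON mseg~mandt = mkpf~mandt
   AND mseg~mblnr = mkpf~mblnr
   AND mseg~mjahr = mkpf~mjahr
  FOR ALL ENTRIES IN ibew
  WHERE mseg~matnr = ibew-matnr
    AND mseg~werks = ibew-bwkey
    AND mseg~mat_kdauf = ibew-vbeln
    AND mseg~mat_kdpos = ibew-posnr.
```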
Thanks again.
regards,
Advait -
Performance issue while wrapping the sql in pl/sql block
Hi All,
I am facing a performance issue in a query when wrapping the SQL in a PL/SQL block.
I have a complex view. When querying the view using
Select * from v_csp_tabs (the name of the view I am using), it takes 10 seconds to fetch 50,000 records.
But when I put some conditions on the view, like
Select * from v_csp_tabs where clientid = 500006 and programid = 1 and vendorid = 1, it takes more than 250 seconds to return the result set.
Now the weird part: this is happening only for one programid, which is 1.
I am using Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production.
Can anyone please suggest what things I need to check?
I am sorry, I cannot provide the explain plan, because this is in production and I do not have enough privileges.
Thank you in advance.
Thnx,
Bits
Bits wrote:
I have a complex view. When querying the view using
Select * from v_csp_tabs (the name of the view I am using), it takes 10 seconds to fetch 50,000 records.
But when I put some conditions on the view, like
Select * from v_csp_tabs where clientid = 500006 and programid = 1 and vendorid = 1, it takes more than 250 seconds to return the result set.
That's one problem with views - you never know how they will be used in the future, nor what performance implications variant uses can have.
>
Now the weird part: this is happening only for one programid, which is 1.
Can anyone please suggest what things I need to check?
I am sorry, I cannot provide the explain plan, because this is in production and I do not have enough privileges.
I understand what you are saying - I have worked at similar sites. HiddenName is correct in suggesting that you need to get execution plans, but sometimes getting privileges from the DBA group is simply Not Going To Happen. It's wrong, but that's the way it is. Follow up on HiddenName's suggestion to get help from somebody who has the needed privileges.
Post the query that the view is executing. Desk-checking a query is NOT ideal, but it is one thing we can do.
I don't suppose you can see V$ views on production - V$SQL and V$SQL_PLAN (probably not, if you can't generate plans, but it's worth a thought) -
Performance issues while opening business rule
Hi,
we're working with Hyperion version 9.2.1 and we're having some performance problems when opening business rules. I analyzed the issue and found out that it has something to do with assigning access privileges to the rule.
The authorization setup looks as follows:
User A is assigned to group G1
User B is assigned to group G2
Group G1 is assigned to group XYZ
Group G2 is assigned to group XYZ
Group XYZ holds the provision "basic user" for the Planning application.
Without assigning any access privilege, the business rule opens immediately.
When assigning the access privilege (validate or launch) to group G1/G2, the business rule opens immediately.
When assigning the access privilege to group XYZ, the business rule opens only after 2-5 minutes.
Has anyone an idea why this happens and how to solve this?
Kind regards
Uli
Edited by: user13110201 on 12.05.2010 04:31
This has been an issue with Business Rules for quite a while. Oracle has made steps both forward and backward in releases later than yours, and they have issued patches addressing, if not completely resolving, the problem. Things finally seem to be much better in 11.1.1.3, although YMMV.
-
Performance issue while accessing SWWLOGHIST & SWWWIHEAD tables
Hi,
We have applied a workflow function in purchase orders (PO) for release strategy purposes. We have two release levels.
The performance issue arises when a user tries to update an existing PO that has already been released: after the user amends the PO and presses the "SAVE" button on the PO screen, the release strategy is reset and the system reads the SWWLOGHIST table, which takes from a few seconds up to minutes to complete the save process.
My workflow's scheduled job details are as follows:
SWWERRE - every 20 minutes
SWWCOND - every 30 minutes
SWWDHEX - every 3 minutes
Table entries:
SWWWIHEAD - 6 million entries
SWWLOGHIST - 25 million entries
Should we archive the above workflow tables?
Is that the only solution?
Kindly advice,
Thanks,
Regards,
Thomas
Hi,
The sizes of both tables are as follows:
SWWLOGHIST - 3GB
SWWWIHEAD - 2GB
I run REORGCHK on all tables on a weekly basis. In DB2 (DB6), do I need to manually reorganize the tables or rebuild the indexes?
You can refer to the attached screenshots for both tables.
Thanks,
Regards,
Thomas -
Performance Issue while Joining two Big Tables
Hello Experts,
We have the following scenario: we need the sales rep associated with a sales order. This information is available in the VBPA table, with sales order, sales order item, and partner function as the key.
Now I'm interested in only one partner function, e.g. 'ZP'. This table has around 120 million records.
I tried both options:
Option 1 - join this table (VBPA) with the sales order item table (VBAP) in the data foundation layer of the analytic view and filter on partner function.
Option 2 - create an attribute view on VBPA with a filter on partner function, and then join this attribute view with the data foundation table in the logical join layer.
Both options are killing the performance.
Is there any way to achieve this?
Your expert opinion is greatly appreciated!
Thanks & regards,
Jomy
Hi,
Lars is correct. You may have to spend a little more time and give a bigger picture.
I have used this join. It takes about 2 to 3 seconds to execute this join for me. My data volume is less than yours.
You must have used a left outer join when joining the attribute view (with the constant filter ZP, as specified in your first post) to the data foundation. Please cross-check once again; sometimes my fat finger inadvertently changes the join type and I have to go back and fix it. If this is a left outer join or a referential join, HANA does not perform the join when you are not requesting any field of the attribute view on table VBPA. This used to be a problem due to a bug in SP4 but got fixed in SP5.
However, if you have performed this join in the data foundation, it does enforce the join even if you did not ask for any fields from the VBPA table. The reason is that you have put a constant filter ZR there (the LIPS->VBPA join in the data foundation, as specified in one of your later replies).
If any measure you are selecting in the analytic view is a restricted or calculated measure that needs some field from VBPA, then the join will be enforced, as you would agree. This is where I had the most trouble. My join itself is not bad, but my business requirement to get the current value of a partner attribute in a higher-level calculation view sent too much data from the analytic view to the calculation view.
Please send the full diagram of your model and the vizplan. Also, if you are using a front-end tool (like Analysis Office), please trap the SQL sent from this tool and include it in the message. Even a plain SQL statement in which you have detected this performance issue would be helpful.
Ramana -
Performance issues while accessing the Confirm/Goods Services transaction
Hello
We are using SRM 4.0 through Enterprise Portal 7.0.
Many of our users are crippled by performance issues when accessing the Confirm/Goods Services tab (transaction BBPCF02).
The system simply shows the clock and never displays the screen.
This problem occurs for some users all the time, and for some users only some of the time.
It is not related to the user's machine, as others are able to access it fast using the same machine.
It is also not dependent on the data size (i.e. the number of confirmations created by the user).
We would like to know why only some users suffer more pronouncedly, and why this transaction is generally slower than all others.
Any directions for finding the probable cause will be highly rewarded.
Thanks
Kedar
Hi Kedar,
Please go through the following OSS Notes:
Note 610805 - Performance problems in goods receipt
Note 885409 - BBPCF02: The search for confirmation and roles is slow
Note 1258830 - BBPCF02: Display/Process confirmation response time is slow
Thanks,
Pradeep -
Performance issues while querying data from a table with a large number of records
Hi all,
I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below.
SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
SELECT SUM (B.BASE_TRANSACTION_VALUE)
FROM
MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A
WHERE A.ORGANIZATION_ID = B.ORGANIZATION_ID
AND A.ORGANIZATION_ID = :b1
AND B.REFERENCE_ACCOUNT = A.MATERIAL_ACCOUNT
AND B.TRANSACTION_DATE <= LAST_DAY (TO_DATE (:b2 , 'MON-YY' ) )
AND B.ACCOUNTING_LINE_TYPE != 15
call     count    cpu   elapsed    disk    query  current   rows
Parse        1   0.00      0.00       0        0        0      0
Execute      3   0.02      0.05       0        0        0      0
Fetch        3 134.74    722.82  847951  1003824        0      2
total        7 134.76    722.87  847951  1003824        0      2
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 193 (APPS)
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
1 1 1 SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
788242 788242 788242 NESTED LOOPS (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
1 1 1 TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
1 1 1 INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
788242 788242 788242 TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
8704356 8704356 8704356 INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
788242 NESTED LOOPS
1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_PARAMETERS' (TABLE)
1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF
'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
788242 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'MTL_TRANSACTION_ACCOUNTS' (TABLE)
8704356 INDEX MODE: ANALYZED (RANGE SCAN) OF
'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
row cache lock 29 0.00 0.02
SQL*Net message to client 2 0.00 0.00
db file sequential read 847951 0.40 581.90
latch: object queue header operation 3 0.00 0.00
latch: gc element 14 0.00 0.00
gc cr grant 2-way 3 0.00 0.00
latch: gcs resource hash 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00
gc current block 3-way 1 0.00 0.00
********************************************************************************
On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment the program completes in 2 hours.
Is there any way I can improve the performance of this query?
Regards
Edited by: mhosur on Dec 10, 2012 2:41 AM
Edited by: mhosur on Dec 10, 2012 2:59 AM
Edited by: mhosur on Dec 11, 2012 10:32 PM
CREATE INDEX mtl_transaction_accounts_n0
ON mtl_transaction_accounts
( transaction_date
, organization_id
, reference_account
, accounting_line_type
)
/
-
Performance issue while accessing reports through InfoView
Users run reports (Webi, Crystal) through InfoView, accessed by categories. While navigating between categories (we have different categories: Sales, Purchasing, etc.) InfoView itself is fine, but it takes around 3-5 minutes to show all the reports within a category, although the runtime of each report is quick and as expected.
In the CMS, navigating between categories and the time taken to show all the reports within the categories are fine. We could not understand why this is happening in InfoView only.
I would like to know if anyone has had similar issues. Are there any settings that need to be made on the Tomcat server, the CMS, or InfoView? I searched for OSS notes and found note 1206095, but it is for Enterprise XI Release 2.
Product Details :
BOE XI 3.1 SP3.
Any info will be helpful.
Hello,
- Is Tomcat installed on the same server as BOE? If not, did you use wdeploy to deploy the WAR files? https://bosap-support.wdf.sap.corp/sap/support/notes/1325139
- Do you have more than one Tomcat server, and is it fronted by a load balancer? If so, try bypassing the load balancer and accessing InfoView directly.
There is a CMS command-line option that might help you improve performance in your environment.
That switch is named -maxobjectsincache; it allows you to increase the maximum number of objects that the CMS stores in its memory cache. Increasing the number of objects reduces the number of database calls required and can improve CMS performance.
The default value for this option is 10000 and the maximum value is 100000. Please keep in mind that it is not recommended to exceed 100,000, as too many objects in memory will degrade the CMS. I would suggest testing with 60,000.
Regards,
Wallie -
Performance issue while saving the sales order
Hi Guys,
I am facing a problem with the creation of a sales order with 100 line items: when I try to save, it takes too much time to save the order. I took a trace and found that the method call /SAPSLL/CL_CUHD=>GET_OBJECT_PK takes 43.9% of the net time. Can anybody tell me what the purpose of this method is?
Call: /SAPSLL/CL_CUHD=>GET_OBJECT_PK
Number: 6,321
Gross: 37,042,153
Net: 37,042,153
Gross (%): 43.9
Net (%): 43.9
Program called: /SAPSLL/CL_CUHD===============CP
<< Moderator message - Please do not promise rewards>>
Thanks in advance..
Mak.
Edited by: Rob Burbank on Jun 2, 2011 9:44 AM
Please run SE30 again, but first switch on everything in the measurement restrictions, including internal tables.
Then measure 3 (!!) times. Do you always get the same gross time, or is there some variance?
The net time should be reduced, because I would assume the internal tables need some or most of the time. If so, then I would guess that you have suboptimal internal table processing (quadratic coding). -
iTunes 10.4 performance issue while tagging
After switching to the 64-bit version of iTunes I experience much worse performance when tagging songs compared to the 32-bit version. Changing some values (like comments or genre) of ten songs can take a few minutes. I have a very large library of about 150,000 songs, but this doesn't explain the loss in performance, because it used to work better before. Is it some kind of new ID3 format, or what else could slow the performance?
No suggestions, but I appear to be having the same issue: batch-changing a bunch of content is suddenly taking a minute or two. I'm having flashbacks to the aged notebook I retired about a year back.
(I'm getting this with iTunes 10.4 and OS X 10.6.8... and now I need to figure out why the heck trying to type a number in Safari is suddenly sending me to different tabs! That's new...) -
Performance issue while fetching metadata from Informatica Repository
I'm working with Informatica 8.6 (an ETL tool), which has its own repository to store metadata, and using its Mapping SDK APIs I'm developing a Java application that fetches objects from the repository.
For this purpose, using "mapfwk.jar", I first connect to the repository with the RepositoryConnectionManager class, and then at the time of fetching the metadata I use the getFolder, getSource & getTarget functions.
Issue: the program is taking too much time to fetch the folders. The time taken depends on the number of metadata objects present, i.e. as the number of objects increases, the time increases.
Please advise how to reduce the time for fetching metadata from the repository.
Source Code:
#1 - Code for connecting to repository
protected static String PC_CLIENT_INSTALL_PATH = "E:\\Informatica\\PowerCenter8.6.0\\client\\bin";
protected static String TARGET_REPO_NAME = "test_rep";
protected static String REPO_SERVER_HOST = "blrdxp-nfadate";
protected static String REPO_SERVER_PORT = "6001";
protected static String ADMIN_USERNAME = "Administrator";
protected static String ADMIN_PASSWORD = "Administrator";
protected static String REPO_SERVER_DOMAIN_NAME = "Domain_blrdxp-nfadate";
protected void initializeRepositoryProps(){
CachedRepositoryConnectionManager rpMgr = new CachedRepositoryConnectionManager(new PmrepRepositoryConnectionManager());
RepoProperties repoProp = new RepoProperties();
repoProp.setProperty(RepoPropsConstant.PC_CLIENT_INSTALL_PATH, PC_CLIENT_INSTALL_PATH);
repoProp.setProperty(RepoPropsConstant.TARGET_REPO_NAME, TARGET_REPO_NAME);
repoProp.setProperty(RepoPropsConstant.REPO_SERVER_DOMAIN_NAME, REPO_SERVER_DOMAIN_NAME);
repoProp.setProperty(RepoPropsConstant.REPO_SERVER_HOST, REPO_SERVER_HOST);
repoProp.setProperty(RepoPropsConstant.REPO_SERVER_PORT, REPO_SERVER_PORT);
repoProp.setProperty(RepoPropsConstant.ADMIN_USERNAME, ADMIN_USERNAME);
repoProp.setProperty(RepoPropsConstant.ADMIN_PASSWORD, ADMIN_PASSWORD);
rep.setProperties(repoProp);
rep.setRepositoryConnectionManager(rpMgr);
}
#2 - Code for fetching metadata
Vector<Folder> rep_FetchedFolders = new Vector<Folder>();
public void fetchRepositoryFolders() {
    initializeRepositoryProps();
    System.out.println("Repository Properties set");
    // To fetch folders
    Vector<Folder> folders = new Vector<Folder>();
    folders = (Vector<Folder>) rep.getFolder();
    for (int i = 1; i < folders.size(); i++) {
        Folder t_folder = new Folder();
        t_folder.setName(((Folder) folders.get(i)).getName());
        // To fetch sources from the folder
        Vector listOfSources = ((Folder) folders.get(i)).getSource();
        for (int b = 0; b < listOfSources.size(); b++) {
            Source src = (Source) listOfSources.get(b);
            t_folder.addSource(src);
        }
        // To fetch targets from the folder
        Vector listOfTargets = ((Folder) folders.get(i)).getTarget();
        for (int b = 0; b < listOfTargets.size(); b++) {
            Target trg = (Target) listOfTargets.get(b);
            t_folder.addTarget(trg);
        }
        rep_FetchedFolders.addElement(t_folder);
    }
}
Hi neel007,
Just use a List instead of a Vector; it's more performant:
List<Folder> rep_FetchedFolders = new ArrayList<Folder>();
If you need to synchronize your list, then:
List<Folder> rep_FetchedFolders = Collections.synchronizedList(new ArrayList<Folder>());
Also, if you're using Java 5 or higher and if you're sure listOfTargets contains only Target objects, instead of this:
for (int b = 0; b < listOfTargets.size(); b++) {
    Target trg = (Target) listOfTargets.get(b);
    t_folder.addTarget(trg);
}
you may do this:
for (Target trg : listOfTargets) {
    t_folder.addTarget(trg);
}
Edited by: Chicon on May 18, 2009 7:29 AM
Edited by: Chicon on May 18, 2009 7:35 AM