Date of advance against PO or proforma invoice, or dispatch document and material receipt
Is it possible to generate a report to track all the material receipts made against particular advances paid for different line items of a PO,
or to know whether goods have been received against the advance payments made within a specific time and, if not, how much overdue they are?
What would be the common field between the advance payments and the material receipt documents at the plants, so as to build such a report?
Edited by: ca sanjeev mehndiratta on Oct 28, 2009 8:27 AM
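A minimal sketch of such a tracking report, assuming (this is an assumption, not confirmed in the thread) that the purchase order number and item, fields EBELN / EBELP, serve as the common key between the down payments and the goods receipts. The tables and columns below are invented stand-ins, not the real SAP tables (advances would come from vendor items such as BSIK/BSAK, receipts from EKBE/MSEG):

```python
import sqlite3

# Hypothetical, simplified stand-ins for the SAP tables; the common key
# between advances and receipts is the PO number and item (EBELN / EBELP).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE advance (ebeln TEXT, ebelp INTEGER, adv_date TEXT, amount REAL);
CREATE TABLE receipt (ebeln TEXT, ebelp INTEGER, gr_date TEXT, qty REAL);
INSERT INTO advance VALUES ('4500000001', 10, '2009-09-01', 5000.0),
                           ('4500000001', 20, '2009-09-01', 3000.0);
INSERT INTO receipt VALUES ('4500000001', 10, '2009-09-20', 100.0);
""")

# LEFT JOIN keeps advances with no goods receipt, so overdue items show up;
# days_open is filled only for advances still waiting for material.
rows = con.execute("""
SELECT a.ebeln, a.ebelp, a.adv_date, r.gr_date,
       CASE WHEN r.gr_date IS NULL
            THEN CAST(julianday('2009-10-28') - julianday(a.adv_date) AS INTEGER)
       END AS days_open
FROM advance a
LEFT JOIN receipt r ON a.ebeln = r.ebeln AND a.ebelp = r.ebelp
""").fetchall()
for row in rows:
    print(row)
```

The same join-on-PO-item idea carries over whether the report is built in ABAP, a query tool, or plain SQL.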
Similar Messages
-
Data in table is huge so performance is reduced
Hi all,
I have tables in an Oracle database where the number of records is huge (six years of data), due to which data retrieval is slow when I generate the report.
How can I increase performance so that the report can be generated fast?
Thanks in advance
Thank you for polluting TWO forums with the exact same request.
Please stick to the thread with answers in it.
-
When I send out mail from MS Outlook enterprise account in the office to my Mac at home they are received as "winmail.dat" files. Even if I perform a "save as" to the correct file name the file format is still not recognized. Why is this happening!?
http://www.joshjacob.com/mac-development/tnef.php
-
Data Mining Reports against a Cube using Excel
I'm quite new to Data Mining so please bear with me if these are silly questions.
1. Can you create a data mining model against a Cube from within Excel? I seem to only be able to target a relational database.
2. Once you have a DM model in place and it's run against data in Excel... can these be published to Sharepoint and automatically updated? So for instance, you have 10,000 rows of data in Excel that have been categorised... you want this updated on a weekly
basis so that the categories are updated as the data in the rows changes - someone might change category based on their usage of "something" for instance.
Thanks,
Marcus
Hello Marcus,
1. No. To modify the SSAS database, which contains the Data Mining model, you need BIDS (Business Intelligence Development Studio) for versions 2005-2008 R2, or SSDT (SQL Server Data Tools) for versions 2012-2014.
You could use the Data Mining Addin for Excel to create local data mining models, see
http://office.microsoft.com/en-us/excel-help/data-mining-add-ins-HA010342915.aspx
Olaf Helper
[ Blog] [ Xing] [ MVP] -
Hide particular data in the tasks list in performance management
Hi,
I'm trying to hide the transfer scorecard task from the task list on the Performance Management page in Manager Self-Service.
I tried extending that view object and adding an extra WHERE clause to hide that row, but I got a SQLException error.
So I reverted the VO extension and instead tried extending the controller, adding the WHERE clause after
super.processRequest(oapagecontext, webbean);
<added whereclause to hide particular record>
but I'm still getting that SQLException error.
Any suggestions please.
Thanks in advance,
SAN
You can do that by simply sorting the ALV display by <b>object type</b>; all the data will then be grouped according to its object type, and each object type will be printed only once.
You can create a variant in the ALV and call it in your code, or insert code like this (if you use the function module for creating the ALV):
* Information for sort and subtotals (definition from type pool SLIS;
* the duplicate components in the original paste would not compile)
types: begin of slis_sortinfo_alv,
         spos like alvdynp-sortpos,
         fieldname type slis_fieldname,
         tabname type slis_fieldname,
         up like alvdynp-sortup,
         down like alvdynp-sortdown,
         group like alvdynp-grouplevel,
         subtot like alvdynp-subtotals,
         comp(1) type c,
         expa(1) type c,
         obligatory(1) type c,
       end of slis_sortinfo_alv.
DATA: l_v_sort TYPE slis_sortinfo_alv.

REFRESH: ta_fieldcat, ta_sort.

v_layout-zebra             = 'X'.
v_layout-colwidth_optimize = 'X'.

l_v_sort-fieldname = 'OBJECTTYPE'.
l_v_sort-spos      = 1.
l_v_sort-up        = 'X'.
l_v_sort-subtot    = 'X'.
APPEND l_v_sort TO ta_sort.

CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
  EXPORTING
    i_callback_program = v_repid
    is_layout          = v_layout
    it_fieldcat        = ta_fieldcat[]
    <b>it_sort            = ta_sort[]</b>
    it_events          = ta_events[]
    is_sel_hide        = wa_selcrit
    i_save             = 'A'
    is_variant         = spec_layout
  TABLES
    t_outtab           = ft_output
  EXCEPTIONS
    program_error      = 1
    OTHERS             = 2.
IF sy-subrc NE 0.
  EXIT.
ENDIF.
Regards,
-don- -
RE: Case 59063: performance issues w/ C TLIB and Forte3M
Hi James,
Could you give me a call, I am at my desk.
I had meetings all day and couldn't respond to your calls earlier.
-----Original Message-----
From: James Min [mailto:jmin@brio.forte.com]
Sent: Thursday, March 30, 2000 2:50 PM
To: Sharma, Sandeep; Pyatetskiy, Alexander
Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
Hello,
I just want to reiterate that we are very committed to working on
this issue, and that our goal is to find out the root of the problem. But
first I'd like to narrow down the avenues by process of elimination.
Open Cursor is something that is commonly used in today's RDBMS. I
know that you must test your query in ISQL using some kind of execute
immediate, but Sybase should be able to handle an open cursor. I was
wondering if your Sybase expert commented on the fact that the server is
not responding to a commonly used command like 'open cursor'. According to
our developer, we are merely following the API from Sybase, and open cursor
is not something that particularly slows down a query for several minutes
(except maybe the very first time). The logs show that Forte is waiting for
a status from the DB server. Actually, using prepared statements and open
cursor ends up being more efficient in the long run.
Some questions:
1) Have you tried to do a prepared statement with open cursor in your ISQL
session? If so, did it have the same slowness?
2) How big is the table you are querying? How many rows are there? How many
are returned?
3) When there is a hang in Forte, is there disk-spinning or CPU usage in
the database server side? On the Forte side? Absolutely no activity at all?
We actually have a Sybase set-up here, and if you wish, we could test out
your database and Forte PEX here. Since your queries seem to be running
off of only one table, this might be the best option, as we could look at
everything here, in house. To do this:
a) BCP out the data into a flat file. (character format to make it portable)
b) we need a script to create the table and indexes.
c) the Forte PEX file of the app to test this out.
d) the SQL statement that you issue in ISQL for comparison.
If the situation warrants, we can give a concrete example of
possible errors/bugs to a developer. Dial-in is still an option, but to be
able to look at the TOOL code, database setup, etc. without the limitations
of dial-up may be faster and more efficient. Please let me know if you can
provide this, as well as the answers to the above questions, or if you have
any questions.
Regards,
At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
James, Ken:
FYI, see attached response from our Sybase expert, Dani Sasmita. She has
already tried what you suggested and results are enclosed.
++
Sandeep
-----Original Message-----
From: SASMITA, DANIAR
Sent: Wednesday, March 29, 2000 6:43 PM
To: Pyatetskiy, Alexander
Cc: Sharma, Sandeep; Tenerelli, Mike
Subject: Re: FW: Case 59063: Select using LIKE has performance
issues
w/ CTLIB and Forte 3M
We did that trick already.
When it is hanging, I can see what it is doing.
It is doing OPEN CURSOR, but the exact statement of the cursor it is trying to open is not clear.
When we run the query directly to Sybase, not using Forte, it is clearly
not opening any cursor.
And running it directly to Sybase many times, the response is always
consistently fast.
It is just when the query runs from Forte to Sybase, it opens a cursor.
But again, in the Forte code, Alex is not using any cursor.
In trying to capture the query, we even tried to audit any statement coming
to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
==============================================
James Min
Technical Support Engineer - Forte Tools
Sun Microsystems, Inc.
1800 Harrison St., 17th Fl.
Oakland, CA 94612
james.min@sun.com
510.869.2056
==============================================
Support Hotline: 510-451-5400
CUSTOMERS open a NEW CASE with Technical Support:
http://www.forte.com/support/case_entry.html
CUSTOMERS view your cases and enter follow-up transactions:
http://www.forte.com/support/view_calls.html
Earthlink wrote:
Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
GR/IR Clearing When to perform F.13/F.19 and MR11
Dear,
Can you pls explain GR/IR Clearing and its grouping, whats the process for GR/IR
adjustments done?
What is Grouping and Regroping?
When to perform F.13/F.19 and MR11?
I have read the threads discussed here on this topic, still I do not understand.
Thanks All
Krishna
Hi
When we execute SAPF124 (F.13) with the flag "only documents which can be cleared", it will display in the detailed list only those documents that can be cleared.
If you don't check this field, it will generate a list even of documents that cannot be cleared.
In this case in OB74 you have given ASSIGNMENT and INVOICE REFERENCE as the criteria.
But what is probably happening is that the ASSIGNMENT is not matching in these documents and thus the system does not find
the item suitable or rather eligible for clearing.
If you check the documentation relative to the "Special processing of GR/IR clearing accounts" in SE38 -> SAPF124, you will find the following information:
"If you set the GR/IR accounts special processing indicator: The program then automatically uses the EBELN and EBELP fields as well as the XREF3 reference field as grouping criteria.
This means that documents with the most recent posting date are initially ignored."
So the system is checking these three fields for grouping the documents.
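A toy illustration of that grouping rule (the field names follow SAP, but the data and the zero-balance check are invented for the sketch): open items are grouped by EBELN, EBELP and XREF3, and a group is eligible for clearing only when its debits and credits balance out.

```python
from collections import defaultdict

# Invented GR/IR line items; amount > 0 for goods receipt, < 0 for invoice.
items = [
    {"EBELN": "4500000001", "EBELP": 10, "XREF3": "", "amount": +500.0},  # GR
    {"EBELN": "4500000001", "EBELP": 10, "XREF3": "", "amount": -500.0},  # IR
    {"EBELN": "4500000002", "EBELP": 10, "XREF3": "", "amount": +300.0},  # GR only
]

# Group by the three fields the program uses with special processing set.
groups = defaultdict(list)
for item in items:
    groups[(item["EBELN"], item["EBELP"], item["XREF3"])].append(item)

# A group clears only when it balances to zero; the GR-only item stays open.
clearable = {key for key, grp in groups.items()
             if abs(sum(i["amount"] for i in grp)) < 1e-9}
print(clearable)
```

So an item whose ASSIGNMENT (or here XREF3) does not match the rest of its group will never land in a zero-balance group and stays open, which is the symptom described above.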
For clearing the GR/IR account it is not obligatory to set the indicator GR/IR Account Special Processing. As described in note 546410, it can be more effective not to use this indicator.
Also read notes 546410 and 574482 thoroughly. -
sync performed on new iPhone 4, and ALL my new info (contacts, pics, etc) GONE!! It went back to my old iPhone info... how do I get my iPhone 4 info back?? The look and everything about the phone is like I have the older version!!! I'm sure the data was backed up... but where, and how do I retrieve!!???
If you don't have the prior computer backup files, you may want to look at this to get your information from the phone:
http://www.wideanglesoftware.com/touchcopy/index.php -
I started getting a Timed Out error message on my iPhone 5 when trying to browse the internet that reads: Page could not be loaded. Connection timed out. I performed a Reset Network Settings and am still getting the error message. What is the fix?
Thanks so much for more direction.
I realized that I bought this iPod T4 about a year ago, so decided to check the date and it is still under the 1 year ltd. warranty until 21 Dec. I contacted Apple via Chat and was informed that the problem may lie in the version of iTunes I was running 10.7. Today Apple released v. 11 of iTunes which I was instructed to download. It's TOTALLY different from iTunes 10.7 - drastically different in fact.
The Apple Chat Rep set up a phone call appointment for 8:00 pm ET tonight where another Rep will call me to make sure the restore was accomplished OK. With v. 11 of iTunes I can't even find the restore option!!! The phone Rep will have to help me navigate v. 11 of iTunes. It doesn't seem intuitive at all. Maybe it's just me?!?!
Thank you for sending this new link for firmware. It's not clear to me right now whether I'll need it or not. The phone Rep will probably somehow be able to resolve the problem. If not, they'll probably send me to the nearest Apple store if the problem is hardware, as it would be covered under the warranty. I will give another update as to what transpires as a result of the phone call with the Apple Rep.
Again, thank you so much for helping me. -
[8i] Performance difference between a view and an in-line view?
I have a query with a few 'UNION ALL' statements... each chunk of the query that is joined by the 'UNION ALL' references the same in-line view, but in each chunk it is joined to different tables. If I actually create the view and reference it in each chunk, will it still run the query behind the view for each chunk, or will it only do it once? I just want to know if it will improve the performance of my query. And, I'm not talking about creating a materialized view, just a regular one.
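A toy reproduction of the question (with invented table and view names, in SQLite rather than Oracle 8i): the same UNION ALL query written once against a named view and once with inline views. For a plain, non-materialized view the optimizer merges the view text into each branch, so each branch still does its own work; creating the view does not make it run once and be reused across branches.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE part (id INTEGER, grp TEXT);
INSERT INTO part VALUES (1,'A'),(2,'A'),(3,'B'),(4,'C');
CREATE VIEW part_v AS SELECT id, grp FROM part WHERE grp IN ('A','B');
""")

# Same UNION ALL query, once against the named view...
via_view = sorted(con.execute("""
SELECT id FROM part_v WHERE grp = 'A'
UNION ALL
SELECT id FROM part_v WHERE grp = 'B'
""").fetchall())

# ...and once with the view text inlined in each branch.
via_inline = sorted(con.execute("""
SELECT id FROM (SELECT id, grp FROM part WHERE grp IN ('A','B')) WHERE grp = 'A'
UNION ALL
SELECT id FROM (SELECT id, grp FROM part WHERE grp IN ('A','B')) WHERE grp = 'B'
""").fetchall())

print(via_view == via_inline)
```

That is consistent with the identical explain plans below: a regular view is just a stored query, so only a materialized view would actually compute the result once.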
Because of the complexity of my query, I tried out a simple (really simple) example instead...
First, I created my simple view
Then, I ran a query with a UNION ALL in it against that view
Next, I ran the same UNION ALL query, but using in-line views instead of the one I created, and these are the results I got:
(against the view I created)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=RULE
1 0 UNION-ALL
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
3 2 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
5 4 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
Statistics
14 recursive calls
0 db block gets
1080 consistent gets
583 physical reads
0 redo size
54543 bytes sent via SQL*Net to client
4559 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:01.67
(with in-line views)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=RULE
1 0 UNION-ALL
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
3 2 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
5 4 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
Statistics
0 recursive calls
0 db block gets
1076 consistent gets
582 physical reads
0 redo size
54543 bytes sent via SQL*Net to client
4559 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:00.70
Here, it appears that the explain plans are the same, though the statistics and time show better performance with using in-line views...
Next, I tried the same 2 queries, but using the CHOOSE hint, since the explain plans above show that it defaults to using the RBO...
Here are those results:
(hint + use view)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: CHOOSE (Cost=1840 Card=1071
Bytes=57834)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=642 Bytes=3
4668)
3 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=429 Bytes=2
3166)
Statistics
14 recursive calls
8 db block gets
12371 consistent gets
10850 physical reads
0 redo size
60726 bytes sent via SQL*Net to client
4441 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:02.90
(hint + in-line view)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: CHOOSE (Cost=1840 Card=1071
Bytes=57834)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=642 Bytes=3
4668)
3 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=429 Bytes=2
3166)
Statistics
0 recursive calls
8 db block gets
12367 consistent gets
10850 physical reads
0 redo size
60726 bytes sent via SQL*Net to client
4441 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:02.99
Obviously, for this simple example, using the CHOOSE hint caused worse performance than letting it default to the RBO (though the explain plans still look the same to me). What I find interesting is that with the hint, the version of the query using the in-line views became merely equivalent to the one using the view, if not worse.
But, based on these results, I don't know that I can extrapolate to my complex query... or can I? I'm thinking I'm going to have to actually go through and make my views for the complex query and test it out.... -
Performance comparison between using sql and pl/sql for same purpose
Hi All,
I have to do some huge inserts into a table from some other tables. I have 2 option:
Option 1
======
a. Declare a cursor for a query involving all source tables; this will return the data to be populated into the target
b. Use a cursor for loop to loop through all the records in the cursor
c. for each iteration of the loop, populate target columns, do any calculations/function calls required to populate derived columns, and then insert the resulting record into target table
Option 2
======
Just write a single big "INSERT INTO ..... SELECT ..." statement, doing all calculations/function calls in the SELECT statement that generates the source data.
Now my question is: which option is faster, and why? This operation is performance critical, so I need the option which will run faster. Can anybody help???
Thanks in advance.
user9314072 wrote:
While the above comments are valid, you should consider maintainability in your code. Even if you can write the SQL, the code might become complex, making tuning very difficult and degrading performance.
Beg to differ on that. Regardless of the complexity of the code, SQL is always faster than PL/SQL when dealing with SQL data. The reason is that PL/SQL still needs to use SQL anyway for row retrieval, and in addition it needs to copy the row data from the buffer cache into the PL/SQL PGA. This is an overhead that does not exist in SQL.
So if you are processing 100 million rows, a complex 100-line SQL statement will always be faster than a 100-line PL/SQL procedure doing the same work.
It is a trade-off; my experience is that large SQL statements hundreds of lines long become hard to manage.
You need to ask yourself why there are hundreds of lines of SQL. This points to an underlying problem. A flaky data model is very likely the cause, or not using SQL correctly. Many times a 100-line SQL statement can be reduced to a 10-liner by introducing different logic that solves the exact same problem more easily and faster (e.g. using analytical SQL, thinking +out-of-the-box+).
Also, hundreds of lines of SQL always point to a performance issue. And it does not matter whether you move this code logic to PL/SQL or Java or elsewhere; the performance problem will remain. Moving the problem from SQL to PL/SQL or Java does not reduce the number of rows to process, or significantly change the number of CPU instructions to be executed. And there is the overhead mentioned above: pulling SQL data into a client memory segment for processing (an overhead that does not exist when using SQL).
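The two options from the question can be sketched side by side (Python with SQLite stand-ins for the Oracle tables; all table names are invented). Option 1 ships every row across the SQL/client boundary twice; Option 2 stays entirely inside the SQL engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE src (id INTEGER, amount REAL);
CREATE TABLE tgt1 (id INTEGER, amount_with_tax REAL);
CREATE TABLE tgt2 (id INTEGER, amount_with_tax REAL);
""")
con.executemany("INSERT INTO src VALUES (?, ?)",
                [(i, 100.0 + i) for i in range(1000)])

# Option 1: cursor loop -- fetch each row into the client, derive the
# column there, and insert it back one row at a time.
for id_, amount in con.execute("SELECT id, amount FROM src"):
    con.execute("INSERT INTO tgt1 VALUES (?, ?)", (id_, amount * 1.1))

# Option 2: one set-based statement -- same derivation, no row shipping.
con.execute("INSERT INTO tgt2 SELECT id, amount * 1.1 FROM src")

n1 = con.execute("SELECT count(*) FROM tgt1").fetchone()[0]
n2 = con.execute("SELECT count(*) FROM tgt2").fetchone()[0]
print(n1, n2)
```

Both produce identical target rows; the difference is purely where the row crunching happens, which is the overhead argument made above.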
So how do you address this then? Assuming the data model is correct, there are two primary methods to address the hundreds of SQL lines and the associated performance problem.
Modularise the SQL. Make the hundreds of lines easier to maintain and understand. This can be done using views and the SQL WITH clause.
As for the associated performance issue, materialised views come to mind as an excellent method to address this type of problem.
My advice is to keep things simple, because sooner or later you will need to change the code.
I'm all for that - but introducing more moving parts like PL/SQL or Java and ref cursors and bulk fetching and so on... how does that reduce complexity?
SQL is the first and best place to solve row crunching problems. Do not be fooled into thinking that you can achieve that same performance using PL/SQL or Java. -
I have a family plan with 250 MB of data, which I almost use up each month. I'm going on vacation and will be on the road for two weeks. Should I up my data for a month and then change back? Is it worth it, or should I just run over and pay the extra $15 per gig?
Hello mlazaretti. Vacation time is awesome (especially a road trip!). Since you will be gone for two weeks, you never know when having extra data may come in handy. I highly recommend switching to the next tier up so you have more data. That way it is only $10.00 more versus $15.00, and you don't have to worry about overages. Then change back at the start of the next billing cycle.
If you need help making this change let us know! Have a safe trip!
NicandroN_VZW
Follow us on twitter @VZWSupport -
When I pre-ordered my iPhone 6, I was told I was going to get an additional 1 GB of data per month for a year. It even shows on the receipt I received with the phone. When will I see that reflected on my account? I have had my new phone for a week and used tons more data than usual, and am hoping that will save me this month.
concretedonkey, I'm glad you were able to take advantage of this offer when you ordered your new iPhone 6. I can certainly review your account to ensure this was added for you. Please reply to the direct message I have sent you.
AndreaS_VZW
Follow us on Twitter @VZWSupport -
How to improve query performance at the report level and designer level
How can I improve query performance at the report level and the designer level?
Please let me know in detail.
First, it all depends on the design of the database, the universe, and the report.
At the universe level, check your contexts carefully to get the optimal performance of the universe, and keep your joins on key fields; that will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters and so on), and when you create a parameter, try to match it with the key fields in the database.
good luck
Amr -
hello there everyone
I lost everything because I wanted to update my phone.
Here is exactly what happened:
I plugged my phone into the charger and connected it to a wireless network to start the update. I immediately noticed that I didn't have the required space to proceed with the download, so I had to delete data manually. While I was doing so, my phone became slow and got stuck; then it went off, and when it came on again it was in recovery mode.
Did I lose everything? What did I do wrong? What can I do to recover from this?
Hi, Milanista5
Thank you for visiting Apple Support Communities.
When experiencing issues restoring or updating an iPhone, here are the best articles to go through. If you received a specific error number when restoring, see the section labeled Get more help in the second article below.
iOS: Unable to update or restore
http://support.apple.com/kb/ht1808
iOS: Troubleshooting update and restore issues
http://support.apple.com/kb/ts1275
Cheers,
Jason H.
Maybe you are looking for
-
Office 2011 suddenly error message...
Hi all, suddenly I got the following error message when I want to start any of the Office programs like Word, Excel and Outlook... Maybe somebody has an idea... I just installed it totally new two times but the error comes up again.... Thanks! Proces
-
I am a TB newbie, and did something unintentional: while trying to add one folder to the subscription list, I touched some key --- and there they all go. Cannot tell what happened. The panel subscribe still shows checks for the 40 or so I wanted to d
-
SQL Server 2008 R2 { An Error occured when attaching database(s) }
Hello Guys! I just installed SQL Server 2008 R2 a couple of days ago. On the first day of use I could attach databases (.mdf) without any problem. On day 2, I keep getting this error: http://img515.imageshack.us/img515/14/1212i.png TITLE: Microsoft SQL
-
Every time I plug my iPhone 5s (32 GB) into my macbook pro, iTunes says: The iPhone "iPhone" cannot be synced. An unknown error occurred (1723). And none of my recently bought songs or my ringtones will transfer to my iPhone. I have iOS 8.1.2. I have
-
Hi Guys, I am trying to import the credit card data using the option "Import documents also, if any" but it is not working. The transaction is added to the trip. In the transaction it is described: The credit-card transaction is imported to the trip if view