BitTorrent approach
Hello,
I would like to implement some kind of parallel download in Java, maybe using multiple threads.
I am thinking about an algorithm for splitting any media file into pieces (maybe of fixed length) and uniquely identifying each piece, plus a way for a computer to retrieve each piece.
(I am dealing with peer-to-peer.)
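The piece-splitting idea maps quite directly onto Java: cut the file into fixed-length pieces and identify each piece by its SHA-1 digest, which is essentially what the `pieces` field of a .torrent file stores. A minimal sketch (the class name and piece size are just illustrative; real BitTorrent pieces are typically 256 KiB or larger):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PieceHasher {
    // Fixed piece length; tiny here so the demo is easy to follow.
    static final int PIECE_SIZE = 16;

    // Split a byte array into fixed-length pieces; the last piece may be shorter.
    static List<byte[]> split(byte[] data) {
        List<byte[]> pieces = new ArrayList<>();
        for (int off = 0; off < data.length; off += PIECE_SIZE) {
            pieces.add(Arrays.copyOfRange(data, off, Math.min(off + PIECE_SIZE, data.length)));
        }
        return pieces;
    }

    // Identify a piece by its 20-byte SHA-1 digest, like the "pieces" field of a .torrent.
    static byte[] sha1(byte[] piece) {
        try {
            return MessageDigest.getInstance("SHA-1").digest(piece);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is available on every JVM
        }
    }
}
```

A peer that downloads a piece from anybody can then verify it by re-hashing and comparing against the known digest, which is what makes fetching different pieces from different peers in parallel safe.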
I would be happy if we could start a brainstorm in this thread; it would help me a lot with my school project.
Thanks a lot,
Sebastien
Thank you for your message, and sorry.
I stupidly made a triple post, thinking I could get more feedback that way. You are right: I need to come up with my own solutions first. I have already had a look at the BitTorrent specs and understood the main ideas, but I cannot really see how to do this in Java. I downloaded the source code of the Azureus project and will try to study it, but unfortunately their source code does not seem to have many comments.
I will get back to this thread as soon as I come up with a more practical solution.
Sorry again; I am not lazy, I am just a student with a complex problem to solve.
Regards,
Sebastien
Similar Messages
-
Follow Up...Vocal Mixing Approach
I've been kicking around possibilities in a different thread with some very helpful people
http://discussions.apple.com/thread.jspa?threadID=2386425&tstart=0
Now, I'd like to implement what, AT THIS POINT, I think may be my best approach to getting my vocal mixes good.
First, I am going to go back to using my mic the way I had in previous recordings. That is, using the large windshield that came with my SM7, which helped with a lot of plosives etc. Also, I will apply the bass roll-off and NOT the presence boost. The presence boost seemed to give too-harsh highs. I can EQ later if need be. Flat will be easier to start with. As far as pre-tracking goes, this is about all I can do, since I know my recording levels are good, and I don't have any other equipment besides the SM7 and the Duet.
Next, I would like to know y'all's take on "NORMALIZE". I have usually never normalized my vocal tracks. The last one, which I had a lot of issues with, I actually DID normalize, so perhaps I WON'T in the future. I'm gonna open up a meter for monitoring, probably the MultiMeter. For the sake of time, I'm going to go in and only do some manual edits to the volume envelope for the REAL bad guys, those that stand out clearly to my ears. Maybe do a quick check for anything that is REAL quiet too, and boost that a little manually. Problem is, I don't know how much to boost or lower. Can I see with the meter what the overall average dB is, and if I have a loud part that hits, say, -6 dB while the average is around -12 dB, just lower that part by 6 dB? And vice versa for quieter parts?
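For what it's worth, the dB arithmetic in that question works out the way described: decibel values are already logarithmic, so the adjustment needed is simply the difference between the measured level and the target. A tiny sketch in Java (hypothetical helper names, assuming the usual 20*log10 amplitude convention):

```java
public class GainMath {
    // Gain (in dB) needed to move a part at currentDb to targetDb:
    // e.g. a peak at -6 dB with a -12 dB average needs -6 dB (a 6 dB cut).
    static double gainDb(double currentDb, double targetDb) {
        return targetDb - currentDb;
    }

    // A dB change expressed as a linear amplitude factor: 10^(dB/20).
    // -6 dB is roughly halving the amplitude.
    static double linearFactor(double db) {
        return Math.pow(10.0, db / 20.0);
    }
}
```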
Now, the fader is controlled by the volume envelope, so if I do automation I will lose control of the fader for my levels. Should I insert a GAIN plug-in, or just use the output gain on one of the inserts I'm gonna use anyway? Either way, at this point I will set the relative levels. Just a quick setting to start off with.
Now I believe I can apply this plugin
http://sounds.wa.com/audiounits.html
Thanks Ericksimon!!
Looks like a nice tool, similar to what I was looking for but not too cumbersome. I have not gotten a chance to try it out or see how to use it properly.
So at this point, I think I should have a really steady volume level going on.
Now I am not sure about the whole compression thing. I have come across several options. First though, do I want to do any EQing at this point, before I add compression or after? There are two points where I would want to use EQ: one is to tame any out-of-control frequencies, and two is to shape the overall tone of the vocal. I suppose I could EQ out any bad frequencies first, then do compression and whatnot, then EQ at the end to slightly shape the sound if need be.
Anyways, compression. The Multipressor worries me a bit. I've used it with nice results on instruments and sampled audio, but not on vocals. Also, the downward expansion looks cool. I don't usually have any problem with too much noise, but I guess that's a problem I just never knew I had!!! I just try to track with as little gain as possible while still achieving a usable recording level. Then I can increase the gain later without having a lot of noise on the recorded track. Also, the Multipressor would address the highs in my voice jumping out at times. If I could just compress the highs and not so much the lower frequencies, that might give me better results than JUST compressing the whole spectrum equally.
The OTHER option, maybe, would be to use the DeEsser. I can compress a specific frequency range, but I don't think it has multiple bands, so I couldn't really get any compression going on the other bands. But I was thinking about the DeEsser in tandem with a Noise Gate instead of the Multipressor. This may be a little easier, but I think I might want full-spectrum compression while being able to compress certain "trouble" frequencies more or less.
So I think I should be done at this point as far as levels and all that go, except perhaps for using EQ to color my track a bit, but that is gonna fall in with applying reverb and delay or whatever else I want to add flavor, WHICH IS A WHOLE other thread. After mixing about 13 tracks in Logic now, I still can't find a setting that I'm comfortable with for my voice. Every recording seems different, so every time I have to do something quite different. And every time I feel like it's the first time!!!
Anyways, how does my approach at this point look??
Holy crap... I didn't realize I had written that much. My god, if anyone replies you are truly a saint.
Bee Jay wrote:
Next, I would like to know y'alls take on "NORMALIZE".
Never do it. Absolutely pointless.
I think if it doesn't hurt anything I WILL, because I think it helps me when doing manual edits, so that I have a point of reference that is somewhat consistent across multiple audio regions.
Now, the fader is controlled by the volume envelope so if I do automation I will lose control of the fader for my levels,
This is a good point. In general, send the output of a channel to a bus/aux channel. You can then automate one and use the fader on the other to offset the levels; it doesn't really matter which, although I prefer to automate the channel fader to smooth out the signal before it hits the aux where the compressor is.
I am not real familiar with aux channels; I've tried using them but didn't understand the send amounts and how they affect the signal. If I put the send amount at 100, does 100% of the signal get passed to the fader then? So in your example, I would definitely want to set it at 100. As of now I simply used the gain control on the compressor or whatever plugin was last in the chain; I think that should not sound any different...
The Multipressor worries me a bit. I've used it with nice results on instruments and sampled audio, but not on vocals
I wouldn't use it unless there is a definite reason to. At this point, you don't sound sure why you are using it, and thus are unlikely to use it effectively. So don't over-complicate things and use something simpler; there are plenty of good-sounding compressors around, if you don't like Logic's.
Well, what I did is look at the problem frequencies that I had corrective-EQ'd and made that one band, from say 700-1200, then made the other two bands 0-700 and 1200-18000 or whatever. I didn't touch the low band, but I applied compression slightly differently to each of the other bands. I think it turned out much better than I expected, but I really don't know that I needed much compression anyway... the manual automation, rider plug-in and EQing had the signal going into the compressor pretty good already. It's amazing how much better a compressor sounds when the signal coming in isn't pure garbage!!!
I just try to track with as little gain as possible while still achieving a usable recording level.
What levels do you record at?
I believe the settings on my Duet are at about 56-58 usually when I record. This puts the input signal usually bouncing around between -24 and -12 dB. I heard that was a good range and leaves lots of headroom. I use the independent monitoring level to crank it up so I can record at this level; otherwise it's too quiet to hear over the beat. I like to perform with lots of volume. These settings seem to work well, and NO, I don't have a noise problem. Actually, after all the compression and everything was said and done, the "silent" parts between phrases or words were pretty much dead silent. So that seems to tell me my gain staging is correct?
Also, the multipressor would address my Highs in my voice jumping out at times. If I could just compress the Highs and not so much the lower frequencies. That might give me better results than JUST compression the whole spectrum equally.
Maybe, but I still think it's overcomplicating things, although it's hard to say without knowing the recordings. If you have particular sections which are harsh, the common technique employed by the big guys is to split the vocal ("mult") into multiple channels for the appropriate parts, so you can EQ and treat the parts separately. Some producers will even go so far as to automate or mult individual syllables in a vocal phrase, although that is extreme.
Yeah, haha, it already took something like 4 hours last night to do what I did. I broke down and manually automated nearly every phrase/syllable in the song!!! Boy, that was work. But the results were quite apparent. Even taming the out-of-control stuff gave life to the stuff that used to get kind of lost in the mix. I've been reading the manual to get some easier shortcuts and stuff for editing automation, so that should cut the time a bit. Also, I'm not gonna be recording many more backing tracks, I don't think. If I do, I will probably just copy the automation from the first "Lead" track to the backing track. I have gotten a little more comfortable using reverb to help fatten up my vocals.
So I used manual automation edits. Rider Plugin. Corrective EQing. Multipressor. "Color" Eqing. And Finally SilverVerb.
Sounded much better than before. And really I only spent about an hour or two more than what I had originally spent battling with plugins that couldn't correct a bad input. So good in, good out, and you know the rest. And like I said, I accidentally erased most of the automation I did, then saved, then quit. Didn't realize I'd done that. Oh well. I'm learning -
PI 7.11 mapping lookup - data enrichment - appropriate approach?
Hi guys,
we just upgraded from PI 7.0 to PI 7.11.
Now I'm facing a new scenario where an incoming order has to be processed
(HTTP to RFC).
Furthermore, each item of the order has to be enriched with data looked up in an SAP ERP 6.0 system.
The lookup functionality can be accessed via RFC or ABAP proxy.
With the new PI release we have several possibilities to implement this scenario, which are ...
(1) graphical RFC Lookup in message mapping
(2) ccBPM
(3) using of the lookup API in java mapping
(4) message mapping RFC Lookup in a UDF
For performance reasons I would prefer to make use of the Advanced Adapter Engine, if possible.
Furthermore, there should be only one lookup request for all items of the order, instead of one per order item.
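The "one lookup for all items" requirement is really just a batching pattern, independent of which of the four options is chosen: collect the distinct keys of all order items, make a single call, and enrich each item from the returned map. A minimal Java sketch with the actual RFC/ABAP-proxy round trip stubbed out as a function parameter (the names and the signature are illustrative, not the SAP lookup API):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

public class BatchEnricher {
    // Enrich all order items with ONE lookup round trip instead of one per item.
    // 'lookup' stands in for the backend call (e.g. an RFC with a table parameter).
    static Map<String, String> enrich(List<String> itemKeys,
                                      Function<Set<String>, Map<String, String>> lookup) {
        // 1. Collect the distinct keys of all items into a single request payload.
        Set<String> request = new LinkedHashSet<>(itemKeys);
        // 2. One round trip to the backend for the whole order.
        Map<String, String> response = lookup.apply(request);
        // 3. Each item is then enriched locally from the response map.
        return response;
    }
}
```

In a graphical or Java mapping this corresponds to building the RFC request table from all item nodes before the call, rather than putting the lookup inside a per-item UDF.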
I tried to implement possibility (1), but it seems to be hard to fill the request table structure of the RFC function module. All examples on SDN only use simple (single) input parameters instead of tables. Parsing the result table of the RFC seems to be tricky as well.
Afterwards I tried to implement approach (3), using a SOAP adapter as proxy with the XI 3.0 protocol
(new functionality in PI 7.11).
But this ends up in a strange error message, so it seems the SOAP adapter cannot be used as a proxy adapter in this case.
ccBPM also seems to be a good and transparent approach, because there is no need for complex Java code or the lookup API.
So the choice is not so easy.
What's the best approach for this scenario?
Are my notes on the approaches correct, or am I interpreting something wrong?
Any help, ideas appreciated
Kind regards
Jochen
Hi,
The error while trying to use the SOAP channel for proxy communication is ...
com.sap.aii.mapping.lookup.LookupException: Exception during processing the payload. Error when calling an adapter by using the communication channel SOAP_RCV_QMD_100_Proxy (Party: , Service: SAP_QMD_MDT100_BS, Object ID: 579b14b4c36c3ca281f634e20b4dcf78) XI AF API call failed. Module exception: 'com.sap.engine.interfaces.messaging.api.exception.MessagingException: java.io.IOException: Unexpected length of element <sap:Error><sap:Code> = XIProxy; HTTP 200 OK'. Cause Exception: 'java.io.IOException: Unexpected length of element <sap:Error><sap:Code> = XIProxy; HTTP 200 OK'.
So this feature seems not to work for SOAP lookups, doesn't it?
Kind regards
Jochen -
General Approach to Web Application...
Flex is nice. I like it. Build a whole website in Flex and
pull information from XML or ColdFusion. Make calls without leaving
the page. Nice.
But...
From a user's point of view, it is nice to be able to send a URL
to a friend for them to click on to get to a particular "page"
within a Flex app. Can that even be done in Flex without starting
from the homepage?
Or an even simpler thing: how about creating a link from one
"page" to another "page" within a Flex app. Can that be done? I'll
give you a more descriptive example if you don't know what I mean.
Let's say you are nytimes.com and you have lots of articles, and
one arbitrary article/record mentions Adobe. It is customary to link
to Adobe's stock quotes from within an article. In HTML, that would
be pretty simple to achieve. How does Flex do it?
Those are two simple examples that are easy to achieve in
HTML, which is what connects the www. Search robots require links, but I
don't know about robots going through Flash files.
What kind of workarounds, approaches or measures
should I take to build Flex applications that can deal with
everyday internet tasks? Always having to start at a homepage is not
very cool. It is as annoying as describing to people how to click
their way through a website with framesets.
Please feel free to give me any of your input,
///johan
You have a lot of options. I'll try to answer your questions
in the order you've asked them.
Flex has a history mechanism (the History Manager) associated
with the Flex navigation containers: Accordion, TabNavigator,
ViewStack, etc. If you use those controls to move from section to
section of your application, you can grab the URL in the address
bar and you should be able to give someone that URL.
However, you have to make sure your application does all of
its initialization before jumping to a particular section. For
example, if your application requires data to be loaded first,
you'll need to do that before anyone can access other parts of the
Flex application. Most people use Flex to write applications and
not just web sites that can be done in HTML. Note that we say "Flex
application" and not "Flex site". So that's something to consider.
The navigators (e.g., ViewStack) are commonly used to give the
application "pages". The ViewStack, for example, only shows one of
its children at a time. By changing the ViewStack's selectedIndex
you change the children. You need to set up a Button or LinkButton
control (or anything else you can imagine) whose event handler
changes the ViewStack's selectedIndex.
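For readers who don't know Flex, the ViewStack mechanics described above are easy to model: several children, only one visible, and a button handler that just flips an index. A toy sketch in Java (deliberately not ActionScript and not the real Flex API):

```java
import java.util.List;

public class ViewStack {
    // Holds several "pages" but shows only the child at selectedIndex.
    private final List<String> children;
    private int selectedIndex = 0;

    ViewStack(List<String> children) {
        this.children = children;
    }

    // A Button/LinkButton click handler would simply call this.
    void setSelectedIndex(int i) {
        selectedIndex = i;
    }

    String visibleChild() {
        return children.get(selectedIndex);
    }
}
```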
The Flash Player can only display a handful of HTML tags.
Check the Flex 2 documentation for the specifics. But the
bottom-line is that you won't normally be able to include an
article from another web site in the middle of the Flex
application.
The Flash Player 9 has the ability to include special
metadata that search engines are supposed to know how to extract.
Again, consult the documentation.
I think if you go through the tutorials and examples, as well
as experiment on your own, you should get a better feel for what
Flex is all about. -
Hi,
We have done a technical migration of value-based roles to derived roles, and we are facing problems designing the unit-testing approach for this. Can you please suggest what the unit-testing approach should be and how to create test cases for authorizations, specifically for derived roles created from value-based roles?
The goal is that after testing, end users should not notice any changes in the roles approach.
Thanks.
Regards,
Swapnil
<removed_by_moderator>
Edited by: Julius Bussche on Oct 7, 2008 3:40 PM
Hi Swapnil,
The testing of security roles needs to be taken in a two-step approach:
Step 1 Unit Testing in DEV
A. Prepare the test cases for each of the derived roles and ensure that your main focus is to see whether you are able to execute all the tcodes that have been derived from the parent role without authorization errors. You also need to verify that each of the derived roles applies to its respective org-level values.
B. Because there will not be enough data in DEV (except in some cases where you have a fresh refresh of PROD data), it is always advisable to do the actual testing of the roles in QA. The goal here is to see whether you are able to perform a dry run of all tcodes/reports/programs that belong to the roles.
C. You may create fewer unit-test IDs, as you only assign one ID to one role; once the role is tested you can assign the same ID to another role.
Step 2 Integration Testing in QA
A. Prepare the integration test cases for each of the derived roles. Here the testing will most likely be performed by the end users/business analysts for the respective business process. Each test case must reflect the possible org-level authorization objects and values that need to be tested.
B. As integration testing is a simulation of the actual production authorizations scenario, care must be taken when creating multiple integration-test user IDs, assigning them the right roles, and sending the IDs to the end users to perform the testing in QA.
C. The objective here is that the end user must feel comfortable with the test cases and perform both positive and negative testing. Testing results must be captured and documented for any further analysis.
D. In the event of any authorization errors during integration testing, the errors will be sent to the security team along with SU53 screenshots. The roles will be corrected in DEV and transported back to QA, and the testing continues.
E. The main objective of integration testing is also to check whether the transactions show the right data when executed; any mismatch in the data is a direct indication that the derived roles do not contain the right org-level values.
Hope this helps you to understand how testing of Security roles (Derived) is done at a high level.
Regards,
Kiran Kandepalli.
Edited by: Kiran Kandepalli on Oct 7, 2008 5:47 AM -
Approach to tune a query in short time
Hi All,
Oracle 10g. I know this question has been asked a number of times and there are many good replies.
But I just want to know how to approach a completely new query (for example, the task given to me to fine-tune a query in one day when I don't have even the slightest idea how to proceed) when the timeline is very stringent and you have to take a decision just by looking at the explain plan.
I am posting my query here, and what I am looking for is some lead on how to identify the congestion point, which is where this query takes a long time (in my case some 15 minutes, as reported to me).
select
"LEGAL ENTITY",
"Legal Entity Description",
"Cluster",
"Sub_Cluster",
"Account",
rownum,
"Moody_Rating",
"Process_Date",
"Merge_Description",
rownum,
"Merge_Description",
"is_id_ic",
"is_n",
"cusip",
"isin",
"credit_spread_PV01",
"amount",
"Market_Value",
"Currency",
"Sensitivity_Type",
"maturity_Date",
"Exception_Flag",
"Base_Security_Id",
DECODE(sign("Market_Value"),-1,DeCode(SigN("Recovery"),-1,"Recovery",('-'||"Recovery")), ABS("Recovery")) as "Recovery"
from (
select
le.name "LEGAL ENTITY",
le.display_name "Legal Entity Description",
mn4.display_name "Cluster",
mn3.display_name "Sub_Cluster",
bookname.display_name "Account",
(SELECT RATING_NAME
FROM moody_rating
where moody_rating_id = i.moody_rating_id) "Moody_Rating",
to_char(to_date(:v_cob_date,'DD-MM-YY'),'YYYYMMDD') "Process_Date",
ss.issuer "Merge_Description",
PART.MARS_ISSUER "is_id_ic",
PART.PARTICIPANT_NAME "is_n",
NULL "cusip",
NULL "isin",
NULL "credit_spread_PV01",
NULL "amount",
sum(mtmsens.sensitivity_value) "Market_Value",
(SELECT distinct cc.CCY
FROM legacy_country CC
INNER JOIN MARSNODE MN ON CC.countryisocode = MN.NAME
and mn.close_date is null
INNER JOIN MARSNODETYPE MNT ON MN.TYPE_ID =
MNT.NODE_TYPE_ID
AND MNT.NAME = 'COUNTRY'
and mnt.close_date is null
where MN.NODE_ID = part.country_domicile_id
and cc.begin_cob_date <= :v_cob_date
and cc.end_cob_date > :v_cob_date
and rownum < 2) "Currency",
'CREDITSPREADMARKETVALUE' "Sensitivity_Type",
NULL "maturity_Date",
NULL "Exception_Flag",
NULL "Base_Security_Id",
sum(ss.sensitivity_value) "Recovery"
from staging_position sp
left JOIN position p on (
p.feed_instance_id = sp.feed_instance_id
AND p.feed_row_id = sp.feed_row_id)
left JOIN staging_instrument si on (si.feed_instance_id =
sp.feed_instance_id AND
si.position_key =
sp.position_key)
left join book b on (b.book_id = p.book_id and
b.begin_cob_date <= :v_cob_date and
b.end_cob_date > :v_cob_date)
left join marsnode bk on (b.book_id = bk.node_id and
bk.close_date is null)
left join marsnode le on (b.leg_ent_id = le.node_id and
le.close_date is null)
left join marsnode bookname on (bookname.node_id = p.book_id and
bookname.close_date is null)
left join marsnodelink mnl on p.book_id = mnl.node_id
and :v_bus_org_hier_id =
mnl.hierarchy_id
and mnl.close_date is null
and :v_cob_date >= mnl.begin_cob_date
and :v_cob_date < mnl.end_cob_date
left join marsnode mn on mn.node_id = mnl.parent_id
and mn.close_date is null
left join marsnodelink mnl2 on mn.node_id = mnl2.node_id
and :v_bus_org_hier_id =
mnl2.hierarchy_id
and mnl2.close_date is null
and :v_cob_date >= mnl2.begin_cob_date
and :v_cob_date < mnl2.end_cob_date
left join marsnode mn2 on mn2.node_id = mnl2.parent_id
and mn2.close_date is null
left join marsnodelink mnl3 on mn2.node_id = mnl3.node_id
and :v_bus_org_hier_id =
mnl3.hierarchy_id
and mnl3.close_date is null
and :v_cob_date >= mnl3.begin_cob_date
and :v_cob_date < mnl3.end_cob_date
left join marsnode mn3 on mn3.node_id = mnl3.parent_id
and mn3.close_date is null
left join marsnodelink mnl4 on mn3.node_id = mnl4.node_id
and :v_bus_org_hier_id =
mnl4.hierarchy_id
and mnl4.close_date is null
and :v_cob_date >= mnl4.begin_cob_date
and :v_cob_date < mnl4.end_cob_date
left join marsnode mn4 on mn4.node_id = mnl4.parent_id
and mn4.close_date is null
--sensitivity data
left JOIN STAGING_SENSITIVITY ss ON (ss.FEED_INSTANCE_ID =
sp.FEED_INSTANCE_ID AND
ss.FEED_ROW_ID =
sp.FEED_ROW_ID)
--sensitivity data
left JOIN STAGING_SENSITIVITY mtmsens ON (mtmsens.FEED_INSTANCE_ID =
sp.FEED_INSTANCE_ID AND
mtmsens.FEED_ROW_ID =
sp.FEED_ROW_ID)
LEFT join xref_domain_value_map XREF on (XREF.Src_Value =
ss.issuer and
XREF.close_action_id is null and
XREF.Begin_Cob_Date <=
:v_cob_date and
XREF.End_Cob_Date >
:v_cob_date AND
xref.domain_map_id = 601 AND
xref.source_system_id = 307 AND xref.ISSUE_ID is not null)
Left join ISSUE i on (i.issue_id = xref.issue_id)
LEFT join participant PART ON (PART.PARTICIPANT_ID =
XREF.TGT_VALUE and
PART.Close_Action_Id is null and
PART.Begin_Cob_Date <= :v_cob_date and
PART.End_Cob_Date > :v_cob_date)
left join moody_rating RATING on (rating.moody_rating_id =
i.MOODY_RATING_ID)
where sp.feed_instance_id in
(select fbi.feed_instance_id
from feed_book_status fbi ,
feed_instance fi
where fbi.cob_date = :v_cob_date
and fbi.feed_instance_id = fi.feed_instance_id
and fi.feed_id in (
select feed_id from feed_group_xref where feed_group_id in (
select feed_group_id from feed_group where description like 'CDO Feeds')
and close_action_id is null
and sp.Feed_Row_Status_Id = 1
and ss.sensitivity_type = 'CREDITSPREADDEFAULT'
and mtmsens.sensitivity_type = 'MTMVALUE'
and le.name='161'
group by le.name,
le.display_name,
mn3.display_name,
mn4.display_name,
mn.display_name,
i.moody_rating_id,
ss.issuer,
PART.MARS_ISSUER,
PART.PARTICIPANT_NAME,
sp.feed_instance_id,
part.country_domicile_id,
bookname.display_name)
And the explain plan:
SELECT STATEMENT, GOAL = CHOOSE Cost=19365 Cardinality=1 Bytes=731
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MOODY_RATING Cost=1 Cardinality=1 Bytes=9
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_MOODY_RATING Cost=0 Cardinality=1
HASH UNIQUE Cost=77 Cardinality=1 Bytes=488
COUNT STOPKEY
HASH JOIN Cost=76 Cardinality=1 Bytes=488
NESTED LOOPS Cost=68 Cardinality=1 Bytes=460
HASH JOIN Cost=66 Cardinality=1 Bytes=450
HASH JOIN Cost=59 Cardinality=1 Bytes=412
NESTED LOOPS Cost=51 Cardinality=1 Bytes=402
HASH JOIN Cost=49 Cardinality=1 Bytes=392
NESTED LOOPS Cost=42 Cardinality=1 Bytes=359
NESTED LOOPS Cost=40 Cardinality=1 Bytes=349
NESTED LOOPS Cost=37 Cardinality=1 Bytes=300
NESTED LOOPS Cost=34 Cardinality=1 Bytes=251
HASH JOIN Cost=32 Cardinality=1 Bytes=241
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=27
NESTED LOOPS Cost=24 Cardinality=1 Bytes=231
NESTED LOOPS Cost=21 Cardinality=1 Bytes=204
NESTED LOOPS Cost=18 Cardinality=1 Bytes=171
NESTED LOOPS Cost=16 Cardinality=1 Bytes=136
NESTED LOOPS Cost=13 Cardinality=1 Bytes=86
NESTED LOOPS Cost=10 Cardinality=1 Bytes=37
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=10
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=27
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=49
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=50
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODETYPE Cost=2 Cardinality=1 Bytes=35
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODETYPE Cost=1 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=NODE_ASSOC Cost=3 Cardinality=1 Bytes=33
INDEX RANGE SCAN Object owner=MARS Object name=PK_NODE_ASSOC Cost=1 Cardinality=3
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=10
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=NODE_ASSOC Cost=3 Cardinality=1 Bytes=49
INDEX RANGE SCAN Object owner=MARS Object name=PK_NODE_ASSOC Cost=1 Cardinality=3
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=49
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=33
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=10
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=39
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=38
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=36
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1 Bytes=10
VIEW Object owner=MARS Cost=7 Cardinality=1 Bytes=28
FILTER
CONNECT BY WITH FILTERING
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_PARENT_ID Cost=3 Cardinality=250 Bytes=2500
HASH JOIN Cost=5 Cardinality=1 Bytes=62
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHYROOT Cost=2 Cardinality=5 Bytes=175
NESTED LOOPS
CONNECT BY PUMP
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=7 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=IDX_MNL_HI_PI_NI Cost=3 Cardinality=4
TABLE ACCESS FULL Object owner=MARS Object name=MARSHIERARCHY Cost=2 Cardinality=1 Bytes=27
COUNT
VIEW Object owner=MARS Cost=19365 Cardinality=1 Bytes=731
HASH GROUP BY Cost=19365 Cardinality=1 Bytes=1112
NESTED LOOPS OUTER Cost=19364 Cardinality=1 Bytes=1112
NESTED LOOPS OUTER Cost=19361 Cardinality=1 Bytes=1040
NESTED LOOPS OUTER Cost=19361 Cardinality=1 Bytes=1037
NESTED LOOPS OUTER Cost=19360 Cardinality=1 Bytes=1019
NESTED LOOPS OUTER Cost=19357 Cardinality=1 Bytes=951
NESTED LOOPS OUTER Cost=19354 Cardinality=1 Bytes=914
NESTED LOOPS OUTER Cost=19351 Cardinality=1 Bytes=877
NESTED LOOPS OUTER Cost=19337 Cardinality=1 Bytes=820
NESTED LOOPS OUTER Cost=19334 Cardinality=1 Bytes=783
NESTED LOOPS OUTER Cost=19320 Cardinality=1 Bytes=726
NESTED LOOPS OUTER Cost=19317 Cardinality=1 Bytes=707
NESTED LOOPS OUTER Cost=19303 Cardinality=1 Bytes=650
NESTED LOOPS OUTER Cost=19300 Cardinality=1 Bytes=613
NESTED LOOPS Cost=19285 Cardinality=1 Bytes=556
NESTED LOOPS Cost=19280 Cardinality=1 Bytes=443
NESTED LOOPS OUTER Cost=19275 Cardinality=1 Bytes=330
HASH JOIN RIGHT SEMI Cost=17457 Cardinality=1 Bytes=248
VIEW Object owner=SYS Object name=VW_NSO_1 Cost=1119 Cardinality=30 Bytes=150
HASH JOIN Cost=1119 Cardinality=30 Bytes=2040
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP Cost=2 Cardinality=5 Bytes=120
HASH JOIN Cost=1116 Cardinality=1607 Bytes=70708
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP_XREF Cost=13 Cardinality=701 Bytes=14721
HASH JOIN Cost=1102 Cardinality=3602 Bytes=82846
INDEX RANGE SCAN Object owner=MARS Object name=IDX_FBS_CD_FII_BI Cost=22 Cardinality=3602 Bytes=46826
TABLE ACCESS FULL Object owner=MARS Object name=FEED_INSTANCE Cost=1024 Cardinality=670264 Bytes=6702640
NESTED LOOPS Cost=16337 Cardinality=324 Bytes=78732
HASH JOIN Cost=14324 Cardinality=1977 Bytes=302481
NESTED LOOPS OUTER Cost=11 Cardinality=1 Bytes=114
NESTED LOOPS Cost=8 Cardinality=1 Bytes=95
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=5 Cardinality=1 Bytes=59
INDEX RANGE SCAN Object owner=MARS Object name=IDX_NODE1 Cost=3 Cardinality=2
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=BOOK Cost=3 Cardinality=2 Bytes=72
INDEX RANGE SCAN Object owner=MARS Object name=IDX_BOOK_LEI_BCD Cost=2 Cardinality=4
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=19
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
PARTITION RANGE ALL Cost=13995 Cardinality=3854299 Bytes=150317661
TABLE ACCESS FULL Object owner=MARS Object name=POSITION Cost=13995 Cardinality=3854299 Bytes=150317661
PARTITION RANGE ITERATOR Cost=2 Cardinality=1 Bytes=90
PARTITION HASH ITERATOR Cost=2 Cardinality=1 Bytes=90
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_POSITION Cost=2 Cardinality=1 Bytes=90
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_STAGINGPOSITON Cost=1 Cardinality=1
PARTITION HASH ITERATOR Cost=1819 Cardinality=1 Bytes=82
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_INSTRUMENT Cost=1819 Cardinality=1 Bytes=82
INDEX RANGE SCAN Object owner=MARS Object name=PK_STAGINGINSTRUMENT Cost=9 Cardinality=2551
PARTITION RANGE ITERATOR Cost=5 Cardinality=1 Bytes=113
PARTITION HASH ITERATOR Cost=5 Cardinality=1 Bytes=113
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_SENSITIVITY Cost=5 Cardinality=1 Bytes=113
INDEX RANGE SCAN Object owner=MARS Object name=IDX_SENSITIVITY_FEED_ROW_ID Cost=3 Cardinality=8
PARTITION RANGE ITERATOR Cost=5 Cardinality=1 Bytes=113
PARTITION HASH ITERATOR Cost=5 Cardinality=1 Bytes=113
TABLE ACCESS BY LOCAL INDEX ROWID Object owner=MARS Object name=STAGING_SENSITIVITY Cost=5 Cardinality=1 Bytes=113
INDEX RANGE SCAN Object owner=MARS Object name=IDX_SENSITIVITY_FEED_ROW_ID Cost=3 Cardinality=8
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=19
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODELINK Cost=14 Cardinality=1 Bytes=57
INDEX RANGE SCAN Object owner=MARS Object name=FKI_15632_NODE_ID Cost=2 Cardinality=14
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=MARSNODE Cost=3 Cardinality=1 Bytes=37
INDEX RANGE SCAN Object owner=MARS Object name=PK_MARSNODE Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=XREF_DOMAIN_VALUE_MAP Cost=3 Cardinality=1 Bytes=68
INDEX RANGE SCAN Object owner=MARS Object name=IDX_XDVM_DMI_SV_BCD Cost=2 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=ISSUE Cost=1 Cardinality=1 Bytes=18
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_ISSUE Cost=0 Cardinality=1
INDEX UNIQUE SCAN Object owner=MARS Object name=PK_MOODY_RATING Cost=0 Cardinality=1 Bytes=3
TABLE ACCESS BY INDEX ROWID Object owner=MARS Object name=PARTICIPANT Cost=3 Cardinality=1 Bytes=72
INDEX RANGE SCAN Object owner=MARS Object name=PK_PARTICIPANT Cost=2 Cardinality=1

Hi,
in your explain plan:
HASH JOIN RIGHT SEMI Cost=17457 Cardinality=1 Bytes=248
VIEW Object owner=SYS Object name=VW_NSO_1 Cost=1119 Cardinality=30 Bytes=150
HASH JOIN Cost=1119 Cardinality=30 Bytes=2040
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP Cost=2 Cardinality=5 Bytes=120
HASH JOIN Cost=1116 Cardinality=1607 Bytes=70708
TABLE ACCESS FULL Object owner=MARS Object name=FEED_GROUP_XREF Cost=13 Cardinality=701 Bytes=14721
HASH JOIN Cost=1102 Cardinality=3602 Bytes=82846
INDEX RANGE SCAN Object owner=MARS Object name=IDX_FBS_CD_FII_BI Cost=22 Cardinality=3602 Bytes=46826
TABLE ACCESS FULL Object owner=MARS Object name=FEED_INSTANCE

This part has the highest cost (which doesn't always mean it is slow). So this leads me to the WHERE clause, where FEED_GROUP, FEED_GROUP_XREF and FEED_INSTANCE are read with full table scans. Maybe this can be improved, although the cardinality is not that high, so a full scan can be the best choice. So the question is: can indexes help here?
Furthermore there is the full table scan on POSITION:
TABLE ACCESS FULL Object owner=MARS Object name=POSITION Cost=13995 Cardinality=3854299 Bytes=150317661

This also looks like a large table (3 million+ records), so is it possible to make this part smaller?
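To find these hot spots without reading the whole plan by hand, you can rank the steps by cost with a small helper (a Python sketch only; the sample plan text here is just a few lines pasted from the output above):

```python
import re

def rank_plan_steps(plan_text, top=3):
    """Rank explain-plan lines by their Cost= value, highest first."""
    steps = []
    for line in plan_text.splitlines():
        m = re.search(r"Cost=(\d+)", line)
        if m:
            steps.append((int(m.group(1)), line.strip()))
    steps.sort(key=lambda s: s[0], reverse=True)
    return steps[:top]

plan = """\
HASH JOIN RIGHT SEMI Cost=17457 Cardinality=1 Bytes=248
VIEW Object owner=SYS Object name=VW_NSO_1 Cost=1119 Cardinality=30 Bytes=150
TABLE ACCESS FULL Object owner=MARS Object name=POSITION Cost=13995 Cardinality=3854299 Bytes=150317661
"""
for cost, line in rank_plan_steps(plan):
    print(cost, line)
```

Remember the caveat above: the highest-cost step is where to start looking, not proof that it is the slow one.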
Herald ten Dam
http://htendam.wordpress.com -
Need info on the approach for building admin forms for authoring
Hi,
I have a requirement to store content in the content repository. I have two levels of data to store, i.e. continents and countries; both continents and countries have their own properties, and each continent contains country nodes. I need to build a mechanism to store the content at a particular location in the JCR. The content authors should be able to add, edit or delete the continents and countries.
What's the best way to build this admin form? Should custom node types be created? Should a custom action type be built? To support a dynamic UI, i.e. add, edit and delete in a single form: how can it be done efficiently?
If someone can provide me with pointers, I will start with my build. Let me know if you need more information.
Thanks,
Chetanya

Justin,
The approach you listed sounds easy to implement.
However, I want to understand why, out of all the options, we chose cq:Page and not nt:unstructured or sling:orderedFolder. Can sorting be enabled in this case?
I also have various scenarios where I need to store different forms of data. Another example is storing product information alphabetically. There are a lot of products to be stored, and they need to be displayed in a tab format. The tabs look something like A-E, F-L, M-S, T-Z, and this should be configurable. Do you recommend creating one page per letter, e.g. A, B, C..., and adding all products (components) starting with A under page A, and so on? Or is there a better way to implement this?
Could something like a calendar event be used here, where the year, month and day nodes get created first and then the event node is added? If so, in the above scenario how can the nodes A, B, C... be auto-created when products are added? Also, the products always need to be sorted.
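For reference, the tab bucketing I have in mind can be sketched like this (Python used only to illustrate the logic; the tab ranges would really come from configuration, and the product names are made up):

```python
# Configurable tab ranges, e.g. read from a configuration node at runtime.
TABS = [("A-E", "A", "E"), ("F-L", "F", "L"), ("M-S", "M", "S"), ("T-Z", "T", "Z")]

def tab_for(product_name):
    """Return the tab label a product sorts under, by its first letter."""
    first = product_name[:1].upper()
    for label, lo, hi in TABS:
        if lo <= first <= hi:
            return label
    return None

def group_products(products):
    """Group a product list into tab-keyed buckets, kept in sorted order."""
    grouped = {label: [] for label, _, _ in TABS}
    for p in sorted(products, key=str.upper):
        label = tab_for(p)
        if label:
            grouped[label].append(p)
    return grouped
```

The same rule would decide under which parent node (A-E, F-L, ...) a new product node gets created.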
Thanks,
Chetanya -
How should IT Departments approach the App Store?
I ask this question in my capacity as an IT Administrator for a small/midsize company. We have 4 Macs, 8 iPads, and 3 iPhones. I have 1 Mac and 1 iPad that I use at work. Two years ago we considered the options for how to support Apple ID accounts, iTunes, the App Store, and the payment method(s) which need to be on file.
Given our experience with accidental app and in-app purchases (according to the employees), the accounting department and the president decided, after a conference call with Apple Support using AppleCare, that rather than keep corporate credit cards on file with corporate Apple IDs, we would switch to a model that de-coupled purchasing approval from installation approval. The old method required us to review the monthly corporate card statements, then chase down people to make sure proper purchasing approval had been sought prior to an app or in-app purchase. There was no incentive for employees to seek prior approval, even though one would expect professional adults to do so. Currently the company notes that it takes on average 8 weeks for an employee to reimburse the company for a recreational purchase.
The replacement system currently in use is for all Apple IDs to be personal accounts with personal credit cards. Employees are then responsible for submitting an expense report, and risk having the request denied. The thinking was that this would curtail accidental app or in-app purchases, as the employee would risk not being reimbursed for an unapproved purchase. This system worked: accidental purchases needing reimbursement went down to zero for the employees for whom this had been a problem with corporate credit cards.
This is the model we use for iPads and iPhones. Up until now, all Mac software purchases were shrink-wrapped copies purchased online from Apple or via an Apple VAR. The push for the Mac to use the App Store changes this.
MY QUESTION IS: if I buy Mac OS X 10.7 Lion or 10.8 Mountain Lion with my personal Apple ID, can I install it on the other company Macs? Will those users need to know my Apple ID password? Or does the OS install without needing the Apple ID password? What if a user needs me to install his or her OS on a company-issued Mac, where he or she bought the OS under his or her own Apple ID: can I install that OS without knowing his or her credentials? Also, can I install my copy of the OS on that company Mac, and would future OS updates run without having to give that end user my own Apple ID password? Can I install the OS using my Apple ID, then hand the Mac off and let him or her run updates with his or her own Apple ID? Does it work that way? We would rather not share passwords, as each of our accounts has a personal credit card on file.
We would consider corporate Apple ID accounts if the following requirements can be implemented. All app or in-app purchases should fire off an event which goes first to the employee’s direct manager then next to the company’s controller for approval. If approved, then the app store or iTunes store should allow the end user to install the purchased item. All company purchases should be kept under NET30 terms, then invoiced at the end of the month and paid all at once. These are the requirements from our accounting department. This is how all of our other vendors (IBM, Microsoft, Oracle, etc.) are paid.
The App Store seems to exist in some new virtual reality which our accounting and legal/policy-compliance departments cannot comprehend. I know IT departments often get blamed for this; however, in our case that would just be shooting the messenger. I am just trying to understand how all of this should work. It seems to me that this new consumer cloud model works the way we currently have Apple IDs set up: the end user interfaces with the App Store, then works directly with accounting to pay for the services. It doesn't seem like IT needs to be involved anymore. This is currently how we are attempting to approach this consumer cloud App Store model. The upside is it lets IT focus on projects which directly require our unique expertise, rather than play App-Store middlemen with no value-add.
Your thoughts?

See also "Apple Support Communities > Mac App Store > Using Mac App Store > Discussions > purchase order" https://discussions.apple.com/message/15455935#15455935
However, that cited solution won't work for us, as the same Apple ID password would allow both purchases and installations. We need to de-couple those two roles according to guidelines from our corporate office: the decision-making (purchasing) and administrative/installation roles need to be separate and discrete. It is similar to an enterprise model where developers write code in a DEV environment, move it to a QA environment for testing, and then a separate IT administrator (or install team) copies the new code into the PROD environment. -
COST CENTER CHANGES FOR OPEN PO REQUIRE BEST APPROACH HOW TO DO IT
Dear All,
We are changing the cost center for open POs. Kindly tell us the best approach for doing this for all open POs; there are 4000 open PO line items.
We will completely block the old POs and immediately change the POs with no GR and IR. But what should we do, and how, if there is a GR, an IR, or one of them? And also, what if there are differences between the IR and GR? Kindly provide all the best possible approaches.
below are the scenarios.
Open PO without GR/IR
Open PO only with GR
Open PO only with IR
Open PO with IR/GR without difference
Open PO with IR/GR with differences
Service entry sheet
Kindly provide me all the best approaches to achieve this task. Keep in mind any approach besides reversal of the GR or IR.
qsm sap
Edited by: qsm sap on Feb 15, 2010 12:08 PM

Hi,
Open PO without GR/IR
Open PO only with GR
Make the account assignment changeable at the time of IR in SPRO for account assignment category 'K', so that you can change the cost center while doing MIRO, if you do not want to go with a mass change.
Service entry sheet
You can change the cost center while doing the SES. No issue.
For the others, reversing the IR is one option.
Regards,
Pardeep Malik -
Confuse on PR & PO data migration approach new to MM module
Hi All,
I'm pretty confused about the PO data migration approach when a PO has a GR, a partial GR, or both GR & IR. I'm hoping someone can enlighten me. I understand that we typically don't migrate a PO that has both GR & IR; the FI team usually brings it over to the new system as a vendor open item in AP. What about a PR or PO which has gone through a full or partial release strategy? What is the follow-up process? I have created a criteria table below. Could someone point me in the right direction? Thanks in advance.
PR

Criteria              | Data migration required | Notes
Open and Released     | Y                       |
Open and not Released | Y                       |
Flag for Deletion     | N                       |
Flag for Block        | Y                       |

PO

Criteria              | Data migration required | Notes
Open and Released     | Y                       |
Open and not Released | Y                       |
GR but no IR          |                         |
GR & IR               | N                       | AP will bring over as open item
Flag for Deletion     | N                       |
Flag for Block        | Y                       |
Partial               | Y                       | For partial GR, recreate the PO only with the missing GR quantity
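The PO criteria above can be captured as a small lookup, to make the decision repeatable during extraction (a sketch only; "GR but no IR" is deliberately left undecided, as in the table):

```python
def po_migration_required(criteria):
    """Map a PO criteria label (from the table above) to whether it migrates.

    Returns True/False per the table, or None where the table has no answer
    yet ("GR but no IR" is still an open question).
    """
    rules = {
        "Open and Released": True,
        "Open and not Released": True,
        "GR & IR": False,        # AP brings it over as an open item instead
        "Flag for Deletion": False,
        "Flag for Block": True,
        "Partial": True,         # recreate the PO with the missing GR quantity
    }
    return rules.get(criteria)
```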
Regards,
John

Hi John,
The approach that I have followed recently is to consider the PO as the base document and convert the other documents based on the PO's condition. This means you first need to see whether the PO is to be converted or not. Then you can proceed to convert the related documents like the PR, agreement, info record, source list, etc.
Also, the open quantity of the PO should be considered for both material and service line items.
Once a GR/SES is created, it gets updated in the PO history table EKBE with its related transaction/event type, i.e. EKBE-VGABE = 1 for GR and 9 for SES. Quantity and value also get updated in the case of materials and services. You can compare these consumed quantities with the PO quantity.
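That comparison of consumed quantity against PO quantity can be sketched like this (the record layout only mirrors the EKBE-VGABE event type and a quantity field; the data is made up for illustration):

```python
def open_quantity(po_qty, history, item_is_service=False):
    """Open qty = PO qty minus consumed qty from PO-history records.

    history: list of dicts with 'vgabe' (1 = GR, 9 = SES) and 'menge' (qty),
    loosely mirroring EKBE-VGABE / a quantity column.
    """
    event = 9 if item_is_service else 1
    consumed = sum(rec["menge"] for rec in history if rec["vgabe"] == event)
    return po_qty - consumed

# A material item: PO for 100, two GRs of 30 and 20 -> 50 still open.
hist = [{"vgabe": 1, "menge": 30}, {"vgabe": 1, "menge": 20}]
```

For a partially received PO, this open quantity is what the recreated PO line would carry.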
Please see below from SCN and let me know if you need more info on PR or PO conversion.
Purchase Requisition Conversion Strategy
Thanks,
Akash -
How to approach this requirement
Business overview:
For every organisation account management will be the core functionality. The account management should include the following:
customer - company - vendor.
1. customer info
2. vendor info
3. organisation info
4. material info
5. purchase order info
6. sales order info
7. subsequent documents such as delivery doc, invoice doc and accounting doc info.
Reports:
1. purchase order line item wise report.
2. open purchase order.
3. sales order line item wise report.
4. open sales order report.
5. open invoice item wise.
I'm new to ABAP; please can someone guide me on how to approach this requirement. Which are the tables and fields I need to work with?
Thank you.

Hi Ashwini,
I am giving you some of the important details of the tables as per your requirement, but there could be more tables than I am providing.
Just a list of tables that come in handy.
Sales orders
Name Description Uses
LIKP Shipped Lines header
LIPS Shipped Lines detail
VBAK Order header Every order (unless archiving)
VBAP Table fields Every line item (unless archiving)
VBBE Open sales order line items Great file, but be careful. Contents don't reflect orders that do not affect purchasing (go figure).
VBEP Schedule line item
VBFA Document flow Let's you move from order to shipping document to invoice.
VBUK Order status
VBUP Line item detail status
VBFK Invoicing header
VBFP Invoicing detail
Material Management
Name Description Uses
MARA Inventory Master
MARC Plant Data
MARD Current Inventory
MAKT Descriptions
MBEW Material Valuation
T179 Product Hierarchy
MVKE Sales data (materials)
MKPF Material document Status code 'R' in VBFA
Purchasing
Name Description Uses
EINA Purchasing inforecord by MATNR/LIFNR Contains things like vendor material number and access key for EINE
EINE Purchasing inforecord detail Contains minimum purchase, group, currency
EKPO Purchase orders
EKET Scheduled lines
EKES Vendor confirmed lines
IKPF Header- Physical Inventory Document
ISEG Physical Inventory Document Items
LFA1 Vendor Master (General section)
LFB1 Vendor Master (Company Code)
NRIV Number range intervals
RESB Reservation/dependent requirements
T161T Texts for Purchasing Document Types
Forecasting
Name Description Uses
MAPR
PROP
PROW
Classification
Name Description Uses
KSSK Material number to class
KLAS Class description
KSML Characteristic name
CABN/CABNT Characteristic name description
CAWN/CAWNT Characteristic name
AUSP Numeric values
CAUFV Service order header
AFPO Service order line Holds items that will create "reservations"
RESB SM Reservations Materials needed for line
Customer Data
KNA1 Customer Master
KNVV Sales information
KNVP Partners (ship-to, etc)
Since you are new to ABAP, it's better for you to also know the system tables and the configuration tables:
System tables
Name Description Uses
DD02T Table texts
DD03L Table fields Lists the fields in a table
DD04T Data element texts
USR02 Valid user names
Config tables (normally begin with "T")
Name Description Uses
T001 Client table
T002 Languages
T005 Region (Country)
TCURR Currency and exchange rates
TVAK Order type
TVSB Shipping condition
TVAGT Rejected reason for order line
Other tables
Name Description Uses
STXH Text header
STXL Text detail
Reward points if useful.
Thank you,
Regards. -
How to approach this requirement
Business overview:
For every organisation, account management will be the core functionality. The account management should include the following:
customer - company - vendor.
1. customer info
2. vendor info
3.organisation info
4.material info
5.purchase order info
6.sales order info
7.subsequent documents such as delivery doc,invoice doc n accounting doc info.
Reports:
1.purchase order line item wise report.
2.open purchase order.
3.sales order line item wise report.
4.open sales order report.
5.open invoice item wise.
I'm new to ABAP. Please can someone guide me on how to approach this requirement. Which are the tables and fields I need to work with?
Thank You.
Ashwini

Hi:
Refer to SAP Tables in this documentation.
http://www.erpgenie.com/abap/tables.htm
You will find the related fields at the web link. If you are unable to find the field and table, go to the functional consultant and ask him about the field.
He will show you: click on F1 and you can see the technical information. Click on this and you will see the table name and field name.
Please let me know if you need more information.
Assign points if useful.
Regards
Sridhar M -
Price change summary report & approach of price change on Sales Orders
Hi,
I have made the setups for updating the price on the Sales Order via profile options (when I update the list price field on a SO line, the SO line price gets updated). The customer has manual price override in place in their existing system, so they want the same in the Oracle system as well. Their price changes don't follow any strict logic; they are quite erratic, based on the market conditions of the day.
(1) How can I get a report of the changed price vs. the price on the price list for all the items on Sales Orders (during a period)? It seems the audit trail functionality for changing the list price is not available.
(2) For the system, is the required price change as above the better approach, or is maintaining a new price list each time the preferable option? In the case of a new price list, do we have any standard report which fetches item-wise price change details on Sales Orders for a period?
Thanks.
With Best Regards,
Nirabh Nayan

Nirabh,
Did you say you update the List Price itself in the Order Lines? In my opinion you should never update List Price. Set the profile OM: Discounting Privilege to 'Unlimited' to allow update of Selling Price, but switch off 'OM: List Price Override Privilege' for the responsibilities so that List Price field is not Editable. So that List Price always reflect the price with which Order Line was created (From the price List). Now create a custom report wherever Unit Selling Price does not match Unit List Price.
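That custom report boils down to a filter over the order lines, something like this sketch (the record layout and field names are illustrative only, not actual OM column names):

```python
def price_override_lines(order_lines):
    """Return the lines whose selling price differs from the list price."""
    return [
        line for line in order_lines
        if line["unit_selling_price"] != line["unit_list_price"]
    ]

# Illustrative data: line 2 has a manual override, line 1 does not.
lines = [
    {"line_id": 1, "unit_list_price": 10.0, "unit_selling_price": 10.0},
    {"line_id": 2, "unit_list_price": 10.0, "unit_selling_price": 8.5},
]
```

With the List Price field locked down as described, this comparison reliably captures every manual override.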
If you really want to go a little further, then create a modifier with application method 'New Price' that should kick in every time the Unit Selling Price is updated. Let me know if that helped.
Dipanjan
Edited by: Dipanjan Maitra on May 25, 2012 2:25 PM -
FDM & ERPI to FDMEE - Upgrade approach/challenges ?
Hello,
We are on FDM/ERPI 11.1.2.2.300 (used for loading data from EBS and file systems into HFM 11.1.2.2.300) and planning to upgrade (within the existing environment [in-place upgrade]) to 11.1.2.3. As we see that FDM/ERPI are integrated together as FDMEE in 11.1.2.3, we have a big grey area on how FDMEE would function in contrast to the disparate FDM/ERPI.
We need input from veterans, or anyone who has experienced the upgrade, to answer the following queries.
Queries:
[1] We have come to know that FDM and ERPI are integrated together as FDMEE in 11.1.2.3.
[a] Since we have FDM and ERPI as separate components in 11.1.2.2.300, how should FDM and ERPI components be upgraded to FDMEE ? Any specific steps / approach that need to be followed ?
[b] Are there any known issues / challenges involved in upgrading from disparate FDM & ERPI to FDMEE ?
[c] How would ERPI artifacts (of 11.1.2.2.300) function in FDMEE - Are there any specific migration or any manual set-up required post-upgrade (to 11.1.2.3) to have ERPI artifacts (like source system, period mapping, location, import formats, data load mapping etc) functioning in FDMEE ?
[d] Similarly how would FDM artifacts (from 11.1.2.2.300) function in FDMEE ? Are there any specific migration or manual set-up required post-upgrade (to 11.1.2.3) to have FDM artifacts (like mapping, validation rules etc) functioning in FDMEE ?
[2] Since only ODI 11.1.1.7 is compatible with 11.1.2.3 (from Compatability matrix), how do we upgrade ODI from 11.1.1.6 to 11.1.1.7 ? Are there any patch available to upgrade or should we uninstall ODI 11.1.1.6 and install ODI 11.1.1.7 ?
Any insightful response will be helpful for us as it would help us to gain clarity/confidence and comfort in upgrading to 11.1.2.3.
Regards,
Sathish

Hello Sathish,
As you are upgrading from 11.1.2.2.300 to EPM 11.1.2.3 (within the existing environment [in-place upgrade]), this is called applying the maintenance release EPM 11.1.2.3 on EPM 11.1.2.2.
[a] When you apply the maintenance release, FDM 11.1.2.2 gets upgraded to FDM 11.1.2.3,
and ERPi gets upgraded to FDMEE 11.1.2.3.
[b] I have not faced any issues while upgrading FDM.
[c] Please check epm_install.pdf for EPM 11.1.2.3 for the upgrade tasks for FDM and FDMEE (see pages 257/258).
regards,
-DM