Best Practice for Using Static Data in PDPs or Project Plan
Hi There,
I want to make custom reports using PDPs & Project Plan data.
What is the best practice for using "static" data (data not available in the standard MS Project 2013 columns) in PDPs and MS Project 2013?
Should I add that data to a custom field (in MS Project 2013), or build PDPs for it?
Thanks,
EPM Consultant
Noman Sohail
Hi Dale,
I have a Project Level custom field "Supervisor Name" that is used for Project Information.
For the purpose of viewing that project-level custom field data in Project views, I have made a task-level custom field "SupName" and used the formula:
[SupName] = [Supervisor Name]
That shows the Supervisor Name in Schedule.aspx.
============
Question: I want that project-level custom field "Supervisor Name" in My Work views (Tasks.aspx).
The field is enabled in Tasks.aspx, BUT the data is not present; the column is blank.
How can I get the data into the My Work views?
Noman Sohail
Similar Messages
-
Best practice for using static methods
When i want to call a static method, should i call:
1) classInstance.staticMethod()
or should i call
2) ClassName.staticMethod()??
Is the first style bad programming practice?

dubwai: which compiler? I had assumed that this was what the JLS specifies, but instead it goes to some length describing how to make the runtime environment treat calls to static methods on instances as if they were static calls on the variable's type.
However, I imagine anyone creating a compiler would go ahead and compile calls to static methods on instances directly to static calls on the variable's type, instead of going through the effort of making the runtime environment treat them that way.
But of course, it is conceivable that someone didn't in their compiler. I doubt it, but it is possible. Sun does compile calls to static methods on instances to static calls on the variable's type:
public class Garbage {
    public static void main(String[] args) {
        Garbage g = null;
        method();
        g.method(); // no NPE: compiled as a static call, not a dereference
    }

    public static void method() {
        System.out.println("method");
    }
}
public class playground.Garbage extends java.lang.Object {
public playground.Garbage();
public static void main(java.lang.String[]);
public static void method();
Method playground.Garbage()
0 aload_0
1 invokespecial #1 <Method java.lang.Object()>
4 return
Method void main(java.lang.String[])
0 aconst_null
1 astore_1
2 invokestatic #3 <Method void method()>
5 invokestatic #3 <Method void method()>
8 return
Method void method()
0 getstatic #4 <Field java.io.PrintStream out>
3 ldc #5 <String "method">
5 invokevirtual #6 <Method void println(java.lang.String)>
8 return -
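The javap output above settles the compilation question; there is also a readability argument for style 2. Static methods are hidden rather than overridden, so an instance-style call binds to the variable's declared type, not the runtime class. A small sketch (class names are invented for illustration):

```java
// Static methods are resolved against the declared type of the
// variable, not the runtime class of the object it refers to.
class Base {
    static String name() { return "Base"; }
}

class Derived extends Base {
    static String name() { return "Derived"; } // hides, does not override
}

public class HidingDemo {
    public static void main(String[] args) {
        Base b = new Derived();
        // Looks like a polymorphic call, but binds to Base.name():
        System.out.println(b.name());       // prints "Base"
        // Style 2 makes the actual target explicit:
        System.out.println(Derived.name()); // prints "Derived"
    }
}
```

This hiding behavior is one reason many style guides (and compiler warnings) flag instance-style calls to static methods.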
Best practice for use of spatial operators
Hi All,
I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results which are within a given geometry, for example selecting parish boundaries within a county.
Our boundary data is highly detailed, commonly containing upwards of 50,000 vertices for a county-sized polygon.
I've currently been experimenting with queries such as:
select *
from
uk_ward a,
uk_county b
where
UPPER(b.name) = 'DORSET COUNTY' and
sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
However, the speed is unacceptable, especially as most of the implementations of the toolkit will be web based. The query above takes around a minute to return.
Any comments or thoughts on best practice for using Oracle Spatial in this way will be warmly welcomed. I'm looking for a solution that is as quick and efficient as possible.

Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results of the execution plan, run in SQL*Plus:
Elapsed: 00:01:24.81
Execution Plan
Plan hash value: 598052089
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
Predicate Information (identified by operation id):
2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+inside')='TRUE')
Statistics
20431 recursive calls
60 db block gets
22432 consistent gets
1156 physical reads
0 redo size
2998369 bytes sent via SQL*Net to client
1158 bytes received via SQL*Net from client
17 SQL*Net roundtrips to/from client
452 sorts (memory)
0 sorts (disk)
125 rows processed
The wards table has 7545 rows, the county table has 207.
We are currently on release 10.2.0.3.
All I want to do with this is generate results which fall within a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon, I guess due to the number of vertices.
Also looking through the forums now for tuning topics... -
Best practice for putting binary data on the NMR
Hi,
We're creating a component that will consume messages off the NMR, encode them, and subsequently put them back on the NMR. What's the best practice for sending binary data over the NMR?
1. setContent()?
2. addAttachment()?
3. setProperty()?
If NormalizedMessage.setContent() is the desired approach, then how can you accomplish that?
Thanks,
Bruce

setContent() is used only for XML messages. The recommended way to accommodate binary data is to use addAttachment().
-
What are the best practices for using the enhancement framework?
Hello enhancement framework experts,
Recently, my company upgraded to SAP NW 7.1 EhP6. This gives us the capability to use the enhancement framework.
A couple of senior programmers were asked to deliver a guideline for use of the framework. They published the following statement:
"SAP does not guarantee the validity of the enhancement points in future releases/versions. As a result, any implemented enhancement points may require significant work during upgrades. So, enhancement points should essentially be used as an alternative to core modifications, which is a rare scenario.".
I am looking for confirmation or contradiction to the statement "SAP does not guarantee the validity of enhancement points in future releases/versions..." . Is this a true statement for both implicit and explicit enhancement points?
Is the impact of activated explicit and implicit enhancements on an SAP upgrade much greater than that of BAdIs and user exits?
Is there any SAP published guidelines/best practices for use of the enhancement framework?
Thank you,
Kimberly
Edited by: Kimberly Carmack on Aug 11, 2011 5:31 PM

Found an article that answers this question quite well:
[How to Get the Most From the Enhancement and Switch Framework as a Customer or Partner - Tips from the Experts|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0f0373e-a915-2e10-6e88-d4de0c725ab3]
Thank you Thomas Weiss! -
Best practice for using messaging in medium to large cluster
What is the best practice for using messaging in a medium-to-large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes and maybe more)?
I will be glad to hear any suggestion or to learn from others experience.
Shimi

Publish/subscribe, right?
lots of subscribers, big messages == lots of network traffic.
it's a wide open question, no?
-
Best Practice for using multiple models
Hi Buddies,
Can you tell me the best practices for using multiple models in a single WD application?
I mean: I am using 3 RFCs in a single application for my function. Each time, I am importing that RFC model under WD ---> Models, and I did the model binding separately to the Component Controller. Is this the right way to implement multiple models in a single application?

It very much depends on your design, but one RFC per model is definitely a no-no.
Refer to this document to understand how to use the model in the most efficient way.
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
Thanks
Prashant -
What is the best practice for using the Calendar control with the Dispatcher?
It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security. However, the Calendar relies on this endpoint to build the events for the calendar. On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works. We've noticed the same behavior on the Geometrixx site.
What is the best practice for using the Calendar control with Dispatcher?
Thanks in advance.
Scott

Not sure what exactly you are asking, but Muse handles the different orientations nicely without having to do anything.
Example: http://www.cariboowoodshop.com/wood-shop.html -
Best Practice for Initial Load Data
Dear Experts,
I would like to know the best practices and factors to consider when performing an initial data load.
For example,
1) requirement from business stakeholders for data analysis
2) age of data needed to meet tactical reporting requirements
3) data dependency crossing sap modules
4) Is there any best practice for loading master data?

Hi,
Check these links on master data loading:
http://searchsap.techtarget.com/guide/allInOne/category/0,296296,sid21_tax305408,00.html
http://datasolutions.searchdatamanagement.com/document;102048/datamgmt-abstract.htm
Regards,
Shikha -
Best practices for using the knowledge directory
Anyone know when it is best to store docs in the Knowledge Directory versus Collab? They are both searchable, but I guess you can publish from the Publisher to the KD. Anyone have any best practices for using the KD or setting up taxonomies in the KD?
Best practices for using the 'cost details' fields
Hi
Please could you advise us on the best practices for using the 'cost details' field within Pricing. Currently I cannot find a way to surface the individual Cost Details fields within the Next Generation UI, even with the 'display both cost and price' box ticked. These fields seem to be surfaced when the Next Generation UI is turned off, but I cannot find them when it is turned on. We can see the 'Pricing Summary' field, but it does not fulfill our needs, as some of our services have both recurring and one-off costs.
Attached are some screenshots to further explain the situation.
Many thanks,
Richard Thornton

Hi Richard,
If you need to configure dynamic pricing that may vary by tenant and/or if you want to set up cost drivers that are service item attributes, you should configure Billing Tables in the Demand Management module in 10.0.
The cost detail functionality in 9.4 will likely be merged with the new pricing feature in 10.0. The current plan is not to bring cost detail into the Service Catalog module. -
Best Practice for disparately sized data
2 questions in about 20 minutes!
We have a cache which holds approx 80K objects, which expire after 24 hours. It's a rolling population, so the number of objects is fairly static. We're on a 64-node cluster with high units set, giving ample space. But the data has a wide size range, from a few bytes to 30 MB and everywhere in between. This causes some very hot nodes.
Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
Or does none of this make any sense at all?
Cheers
A

Angel 1058 wrote:
2 questions in about 20 minutes!
We have a cache which holds approx 80K objects, which expire after 24 hours. It's a rolling population, so the number of objects is fairly static. We're on a 64-node cluster with high units set, giving ample space. But the data has a wide size range, from a few bytes to 30 MB and everywhere in between. This causes some very hot nodes.
Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
Or does none of this make any sense at all?
Cheers
A

Hi A,
It depends... if there is a relationship between keys and sizes, e.g. if a certain part of the key means that the size of the value will be big, then you can implement a key partitioning strategy, possibly together with key association on the key, in a way that spreads the large entries evenly across the partitions (and have enough partitions).
Unfortunately, you would likely not get a totally even distribution across nodes, because you have a fairly small number of entries compared to the square of the number of nodes (by the way, which version of Coherence are you using?)...
Best regards,
Robert -
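Robert's suggestion can be illustrated without Coherence itself. The sketch below is a library-free toy (the class and method names are invented for illustration; real code would use Coherence's KeyAssociation and partitioning-strategy interfaces): it folds a size class into the partition hash so that large entries scatter across partitions instead of clustering.

```java
import java.util.Objects;

// Toy illustration of size-aware key partitioning. In real Coherence
// code this logic would live in a custom partitioning strategy; the
// names here are hypothetical.
public final class SizedKey {
    private final String id;
    private final int sizeClass; // e.g. 0 = small, 1 = medium, 2 = large

    public SizedKey(String id, int sizeClass) {
        this.id = id;
        this.sizeClass = sizeClass;
    }

    // Salt the hash with the size class so entries of the same size
    // class do not all land on the same partition.
    public int partition(int partitionCount) {
        int h = Objects.hash(id, sizeClass);
        return Math.floorMod(h, partitionCount);
    }
}
```

The same key always maps to the same partition, which is the invariant any real partitioning strategy must preserve; the spread only helps when, as Robert notes, the entry count is large relative to the node count.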
Best Practices for Using Photoshop (and Computing in General)
I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, and for conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
It'd be great if everyone would contribute their ideas, and especially their personal experience.
Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial. For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks! Disk drives do fail! They're not all created equal. What would it cost you in aggravation and time to lose your data? Imagine it happening at the worst possible time, because that's exactly when failures occur.
Use an Uninterruptable Power Supply (UPS). Unexpected power outages are TERRIBLE for both computer software and hardware. Lost files and burned out hardware are a possibility. A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much. The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going. Again, how much is it worth to you to have a computer outage and loss of data?
Work locally, copy files elsewhere. Photoshop likes to be run on files on the local hard drive(s). If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there. This way an unexpected network outage or error won't cause you to lose work.
Never save over your original files. You may have a library of original images you have captured with your camera or created. Sometimes these are in formats that can be re-saved. If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
Save your master files in several places. While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system). Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
Make Backups. Back up your computer files, including your Photoshop work, ideally to external media. Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive. The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and it will be there if/when you have a failure or loss of data. And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
Your ideas?
-Noel

APC Back-UPS XS 1300. $169.99 at Best Buy.
Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left. The load with the monitor sleeping is 171 watts.
This has surge protection and other nice features as well.
-Noel -
JSF - Best Practice For Using Managed Bean
I want to discuss what the best practice is for managed bean usage, especially using session scope or request scope to build database-driven pages.
---- Session Bean ----
- In the book Core Java Server Faces, the author mentioned that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store state on the client side, I think storing everything in the session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
In the case of a page bound to a result set, the bean usually holds a java.util.List for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind the XXX property too many times. If you query in getXXX, setXXX is also tricky, as you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle them in an action listener (like a save(actionEvent)).
I would glad to see your thought on this.
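The trade-off described above (query once in the constructor vs. re-query on every getXXX call) has a common middle ground: query lazily on first access, cache for the bean's lifetime, and expose an explicit refresh for action listeners. A minimal plain-Java sketch, where `loadFromDatabase()` is a hypothetical stand-in for the real query:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "lazy getter" compromise for a session-scoped bean:
// query on first access, cache the result, and let an action listener
// invalidate the cache to force a re-read.
public class ResultBean {
    private List<String> rows; // null until the first getRows() call

    public List<String> getRows() {
        if (rows == null) {          // lazy: query only when needed
            rows = loadFromDatabase();
        }
        return rows;                 // cached on repeated EL evaluations
    }

    public void refresh() {          // call from an action listener
        rows = null;                 // next getRows() re-queries
    }

    private List<String> loadFromDatabase() {
        // Hypothetical stand-in for the real database query.
        List<String> result = new ArrayList<>();
        result.add("row-1");
        result.add("row-2");
        return result;
    }
}
```

With session scope, this avoids both the stale-data problem (via refresh()) and the cost of re-querying on every one of the many getter calls JSF makes per request.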
--- Request Bean ---
A request bean is initialized every time a request is made. It sometimes drove me nuts, because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent and a list of child records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent to request scope, the setChildren method is never called when I submit the form. Not sure if this is just ADF or a JSF problem. But if you change the bean to session scope, everything works fine.
I believe JSF doesn't update the bindings for all component attributes. It only updates the input component value binding. Someone please verify that this is true.
In many cases, I found a request bean very hard to work with if there are lots of updates (I have had lots of trouble updating the binding value for rendered attributes).
However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory more quickly than a session bean.
----- any comments or opinions are welcome!!! -----

I think it should be either Option 2 or Option 3.
Option 2 would be necessary if the bean data depends on some request parameters.
(Example: Getting customer bean for a particular customer id)
Otherwise Option 3 seems the reasonable approach.
But, I am also pondering on this issue. The above are just my initial thoughts. -
Best practices for submitting CF data to an AJAX page?
Hi everyone,
I've got a project I'm working on for work and have hit a
little problem.
I am extracting data from my database and then after each
piece of data (just numbers, usually 10 chunks of numbers), I tack
a "|" onto the end of each number. Then, I output the data to the
page. Back on my AJAX enabled page, I get the "responseText" from
that page, and then split it up using javascript and the
pre-inserted "|".
This seems to work fine, but it is quite messy. Also, it would really, really be nice to be able to do sorting and various other operations on the data with JavaScript instead of having to rely on CF's icky code logic.
Can someone please enlighten me as to best practices for this type of thing? I suspect that I'll probably be using XML somehow, but I'd like your opinion.
Thanks!

Check out the Samples and Documentation portions of Adobe's
Spry website for client side use of JSON with Spry.
http://labs.adobe.com/technologies/spry/home.html
Here is link to Adobe's Spry Forums:
http://www.adobe.com/cfusion/webforums/forum/categories.cfm?forumid=72&catid=602
If you are using CF8 you can use the SerializeJSON function
to convert a variable to JSON. You might also be interested in the
cfsprydataset tag. CF 8 documentation:
http://livedocs.adobe.com/coldfusion/8/htmldocs/
If you are using a previous version of CF there is 3rd party
JSON support. You can find links at
http://json.org.