PS 2010: Task Updating via PWA best practice?

Can anyone suggest the best method of tracking task progress via PWA?  In MS Project, you would expect task owners or the project manager to enter the following:
Actual Start, Actual Finish, Actual Duration or Work, Remaining Duration or Work, and then “update” all unstarted/unfinished work after the Status Date.
Is it possible to use the same update process in PWA?  I noticed that some of the fields mentioned above are greyed out (data cannot be entered) in PWA.
Any guidance will be much appreciated.

Project Server 2010 User --
I want to commend you for having the foresight to ask these important questions before your people actually begin using Project Server 2010.  It is easiest to set up the default method of tracking progress at the beginning, and much harder to change it to something else after you go live.  My answer is based on the assumption that you will NOT be using the Timesheet page in PWA, as that requires a totally different method of tracking than what you describe.
Based on your comments about how your PMs track progress in Microsoft Project 2010 desktop projects, you would want to select the Actual Work Done and Work Remaining method of tracking in Project Server 2010.  To prevent people from freelancing how
they enter progress, you should also select the Force Project Managers to Use the Progress Reporting Method Specified Above for All Projects option.  You should be able to leave all of the other options set to their default values on the Task Settings
and Display page.
Next, you should set up the Tasks page to include the following columns, along with other default columns that are necessary:
Actual Start
Actual Work
Remaining Work
Actual Finish
By doing this, you will capture the EXACT information in PWA that your PMs would be capturing in their projects in Microsoft Project.  The difference is that when team members enter their progress in PWA, your PMs will no longer need to manually enter
the progress in their Microsoft Project files.  The methodology you would need to require your team members to follow on the Tasks page in PWA would be this:
If you begin work on a new task this week, enter the date you began work on the task in the Actual Start field.
Enter the total amount of work done so far on each task in the Actual Work field.
Adjust the Remaining Work field value, if necessary, for any task.  Team members can increase or decrease the value as needed.
On the day you finish work on a task, enter this date in the Actual Finish field for the task.
If you change the Remaining Work value for any task, optionally add a note to the task to document the reason for the adjustment.
Submit the task updates for the week on the last day of each reporting period.
Just some thoughts.  Perhaps others in this group will have some good ideas for you as well.  Hope this helps.
Dale A. Howard [MVP]

Similar Messages

  • Direct access to document image via webservice - best practice?

    Howdy,
       Rookie "Post-er".... be gentle.  I've searched and I've read... didn't find this.  I've not taken training.  I may be way off base.
    We have multiple manufacturing sites around the world with local Operator Qualification applications that want to access the enterprise SAP DMS manufacturing document images (to stop dual maintenance - enterprise & local)....  without a dependency on SAP ECC being up.  If we can pass them DMS metadata via near-real-time messages (including the document image access "key" data), then we would assume they can directly access KPro for the document image as needed.
    Would this be considered a "best practice"?   Webservice controlled (access only from known source).
    Is PHIO-ID the only dynamic variable needed (based on info below I found in WIKI)?
    Alternatively, the URL below can be used to retrieve documents from the content server:
    http://<Server_Name>:<Port_No>/archive.dll?get&pVersion=<Version_No>&contRep=<Content_Rep>&docId=<PHIO_ID>
                <Server_Name> = Content server (IP address or name of content server)
                <Port_No> = Port No. of the server
                <Version_No> = Version No of content server (Transaction OACT)
                <Content_Rep> = content category of the server (Transaction OAC0)
                <PHIO_ID> = PHIO Id derived for each upload method
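    For illustration only, here is a minimal sketch (in Java) of how an external application might assemble that GET URL. The host, port, pVersion and contRep values are placeholders, and the PHIO ID is assumed to arrive with the near-real-time DMS metadata message.

    // Sketch: assemble the content server GET URL from DMS metadata fields.
    // All concrete values used in main() are placeholders, not real settings.
    public class ContentServerUrlBuilder {
        static String buildGetUrl(String server, int port, String version,
                                  String contRep, String phioId) {
            return "http://" + server + ":" + port + "/archive.dll?get"
                    + "&pVersion=" + version
                    + "&contRep=" + contRep
                    + "&docId=" + phioId;
        }

        public static void main(String[] args) {
            System.out.println(buildGetUrl("contentserver.example.com", 1090,
                    "0046", "DMS_C1", "0123456789ABCDEF0123456789ABCDEF"));
        }
    }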

    Hello,
    This reply is posted to help others who may refer to it in the future.
    As Amit Maheshwari mentioned, it is possible to give access to a document stored in the Content Server even when the partner system(ECC) is down.
    As per your requirement, you want to access documents when the ECC System is down, but the content server is up and running.
    The following steps can be of help to you:
    1 - Create a program that will generate the URL for each document stored in the content server and send the URL to the external system. (Refer to FM SDOK_PHIO_GET_URL_FOR_GET.)
    The URL refers to the public IP of the internet server on which the content server is hosted.
    2 - With this URL and a few configuration changes on the Content Server (to allow access from external systems), you can fetch the file from the content server to the external system. Access privileges may vary based on your security settings.
    Note: Since the ECC system will be down, creating URLs for all the documents can be a bottleneck.
    Probable solution: While adding a document to the ECC system, the respective URL can be created and transferred to the external system along with the master details and stored there.
    This can be undertaken if it is really important for the business, as it will increase the load for the external system. The solution can be modified based on the business requirement.
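    As a sketch only (assuming the URL has already been stored on the external side and that the content server permits a plain HTTP GET from that system), the download step could look roughly like this in Java:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    // Sketch: the external system fetches a document image using a previously
    // stored content-server URL, with no dependency on ECC being available.
    public class DocumentFetcher {
        static void fetch(String storedUrl, String targetFile) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(storedUrl).openConnection();
            try {
                conn.setRequestMethod("GET");
                if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                    throw new IllegalStateException("Content server returned HTTP " + conn.getResponseCode());
                }
                try (InputStream in = conn.getInputStream()) {
                    Files.copy(in, Paths.get(targetFile), StandardCopyOption.REPLACE_EXISTING);
                }
            } finally {
                conn.disconnect();
            }
        }
    }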
    Thank you,
    Regards,
    Tamilnesan G

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting a brick wall when updating, and the end result is a non-recoverable project.   In a production environment and with projects due, it's best that you never update while in the middle of a project.  Wait until you have a day or two of down time, then test.
    For best practice, get into the habit of saving off your projects to a new name in incremental versions, e.g. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. This way you'll always have two copies and will not lose the entire project.  Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived.  Always cheaper to buy more memory than recouping lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • Managing Tasks on Ethernet cDAQ - Best Practice

    Hi all,
    I'm currently working on a multi-station machine to perform functional product tests using an NI cDAQ-9188. Each station in the machine performs the same function but is intended to run independently. Basically, a given station is loaded with a part, which is tested and sorted, a new part is installed, and the process repeats. During this process the other stations are undergoing the same procedure and sharing the same cDAQ.
    During this testing, digital signals (48) are turned on/off and analog signals (24) are read across all stations. I've found that, in contrast to a PCI task, the Ethernet cDAQ tasks have quite a delay (~750 ms) when starting/stopping. Prior to this project I had only had experience using PCI tasks, where I could read N samples when needed without much start-up delay. The same goes for writing digital samples.
    The solution I've implemented for this is to create a continuous analog read VI that reads all analog signals and loops in the background, continuously writing to a global variable. When a given station gets to a point where it needs to read signals, it pulls the desired index from the global array at that point in time. I also have a similar solution set up for the digital task, except that it reads a global variable for digital states and applies that to the task on each loop iteration. Normally I try to avoid continuous-loop scenarios, but the PC I'm using lets me get away with it.
    Basically, what I really need is a subVI that starts the analog or digital task but is idle until called by any module, and then returns N samples. This VI needs to act as a gate for analog read requests to avoid any reserved-resource issues. I can't really think of how to accomplish this.
    Any ideas?

    If you're looking for more control over the DAQmx task, perhaps you could utilize the DAQmx Control Task VI to control which state the task is in?
    http://digital.ni.com/public.nsf/allkb/3D9CF4F4A0549F40862574420029ECCD
    http://zone.ni.com/reference/en-XX/help/370466V-01/mxcncpts/taskstatemodel/
    Michael K.
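    Purely as a conceptual sketch of the "continuous acquisition loop plus gate" pattern described in the question (LabVIEW itself is graphical, so this Java snippet only illustrates the shape of the idea; channel counts, timing and data are placeholders):

    import java.util.concurrent.ThreadLocalRandom;

    // Conceptual sketch only: one background loop owns the acquisition and
    // publishes the latest scan of all 24 analog channels; stations pull a
    // snapshot on demand through synchronized methods (the "gate"), so the
    // shared task never has to be started and stopped per request.
    public class SharedScanBuffer {
        private double[] latestScan = new double[24];

        // Called by the continuous acquisition loop on every iteration.
        public synchronized void publish(double[] scan) {
            latestScan = scan.clone();
        }

        // Called by any station: returns the channels it owns, as of call time.
        public synchronized double[] snapshot(int firstChannel, int count) {
            double[] result = new double[count];
            System.arraycopy(latestScan, firstChannel, result, 0, count);
            return result;
        }

        public static void main(String[] args) throws InterruptedException {
            SharedScanBuffer buffer = new SharedScanBuffer();

            // Stand-in for the background DAQ read loop (fake data here).
            Thread acquisition = new Thread(() -> {
                while (true) {
                    double[] scan = new double[24];
                    for (int i = 0; i < scan.length; i++) {
                        scan[i] = ThreadLocalRandom.current().nextDouble();
                    }
                    buffer.publish(scan);
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            acquisition.setDaemon(true);
            acquisition.start();

            Thread.sleep(50);
            // A station reading its 4 channels, starting at channel 8.
            System.out.println(java.util.Arrays.toString(buffer.snapshot(8, 4)));
        }
    }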

  • Periodic task in web container best practice

    Hi All,
    My web application has to monitor a specific directory on the hard disk for new incoming files (I know about the difficulties here - that's not the problem). Right now I'm planning to write a servlet which does nothing more than create a java.util.Timer in init() and schedule a java.util.TimerTask with it (and of course cancel() the timer in destroy()).
    How would you solve this?
    Uli
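    For reference, here is a minimal sketch of the approach described above (the watched path and polling interval are placeholders; a ServletContextListener with a ScheduledExecutorService would be a reasonable alternative home for the same logic):

    import java.io.File;
    import java.util.Timer;
    import java.util.TimerTask;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;

    // Sketch: a servlet that owns a java.util.Timer, polls a directory for new
    // files on a fixed schedule, and cancels the timer when it is destroyed.
    public class DirectoryWatchServlet extends HttpServlet {
        private Timer timer;

        @Override
        public void init() throws ServletException {
            final File watched = new File("/var/incoming");   // placeholder path
            timer = new Timer("incoming-file-poller", true);  // daemon thread
            timer.schedule(new TimerTask() {
                @Override
                public void run() {
                    File[] files = watched.listFiles();
                    if (files == null) {
                        return;                               // directory missing or unreadable
                    }
                    for (File f : files) {
                        // hand each new file to the application's processing code here
                    }
                }
            }, 0L, 60000L);                                   // poll once a minute
        }

        @Override
        public void destroy() {
            if (timer != null) {
                timer.cancel();
            }
        }
    }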

  • Best Practices in SharePoint 2010 ( Out of the box feature vs Custom Web Part, development )

    Hi
    How do we decide when to allow custom web parts and when to use out-of-the-box features?
    What are the performance implications when we deploy a custom web part to the SharePoint server?
    Why do some companies prefer to allow only out-of-the-box features, with no custom development at all?

    SharePoint is a powerful, flexible server product that can provide a rich collaboration environment right out of the box.
    The best answer to your question depends on your requirements.  Sometimes out-of-the-box features will solve the whole problem with a little design work, but sometimes your requirements will need a custom web part / solution.
    The big advantage of an OOTB implementation is that it is easy to troubleshoot and fix issues, and you will also find plenty of blog posts about OOTB features on the internet.  With custom development, it is harder to troubleshoot and to identify whether a problem is a SharePoint issue or a custom code issue.
    Check the articles below for more ideas.
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/1e7845ef-61e0-4d01-bb6c-92519e6d7139/sharepoint-2010-outofbox-best-practices?forum=sharepointgeneralprevious
    http://www.cdh.com/media/articles/Pages/SharePoint-out-of-the-box---To-customize-or-not-to-customize.aspx
    Master List of SharePoint 2010 On-Premises Custom Development Best Practices
    http://i.zdnet.com/whitepapers/Quest_WPW_SharepointDev_Custom_US_KS_v3.pdf
    Thanks - WS MCITP (SharePoint 2010, 2013) | Blog: http://wscheema.com/blog

  • Site Maintenance Task Best Practice

    As per our understanding, we need to enable either the "Clear Install Flag" task or the "Delete Inactive Client Discovery Data" task.
    Please let us know what the consequences will be if we enable both tasks, and what the best practices are.
    Prashant Patil

    The Clear Install Flag task is highly dependent on Heartbeat Discovery.  If you install the client on a computer, Heartbeat Discovery sends that information to the site and the install flag is set to Active in the database.  If you later uninstall the client, the install flag remains Active until the client is no longer discovered by Heartbeat Discovery, at which point the install flag is cleared.
    As a rule of thumb, when enabling this task, set the Client Rediscovery period to an interval longer than the Heartbeat Discovery schedule.
    More information about how Clear Install Flag works is given here  http://myitforum.com/cs2/blogs/jgilbert/archive/2008/10/18/client-is-installed-flag-explained.aspx
    Delete Inactive Client Discovery Data:
    I suggest you look at the TechNet documentation, where it is clearly explained: http://technet.microsoft.com/en-us/library/bb693646.aspx
    Eswar Koneti | Configmgr blog: www.eskonr.com | LinkedIn: Eswar Koneti | Twitter: Eskonr

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U; I had never heard of it until now, and we have zero documentation on how to set it up. I was given the task of looking at best practices for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explain to me how iTunes U works in general? My brief understanding is that you design/set up a welcome page for your school with sub-categories like programs/courses, and within those you have things like lecture audio/video files that students can download/view in iTunes. So where are these files hosted? Is it on your own server or on Apple's server? Where & how do you manage the content?
    *2nd question:* We have two Xserve(s) sitting in our server room ready to roll, my question is what is the best method to configure them so it meets our need of "high availability in active/active mode, load balancing, and server scaling". Originally I was thinking about using a 3rd party load balancing device to meet these needs, but I was told there is no budget for it so this is not going to happen. I know there is IP Failover but one server has to sit in standby mode which is a waste. So the most likely scenario is to setup DNS round robin and put both xserves in active/active. My question now is (this maybe related to question 1), say that all the content data like audio/video files are stored by us, (We are going to link a portion of our SAN space to Xserve for storage), if we are going with DNS round robin and put the 2 servers in Active/Active mode, can both servers access a common shared network space? or is this not possible and each server must have its own storage space? And therefore I must use something like RSYNC to make sure contents on both servers are identical? Should I use XSAN or is RSYNC good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

    Raja Kondar wrote:
    What is the best practice for having a server pool, i.e.
    1) having a single large server pool consisting of "n" guest VMs, or
    2) having multiple small server pools, each consisting of a smaller number of guest VMs?
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have no more than about 20 servers in it: OCFS2 starts to strain after that.

  • Best Practice Life Science Pharmaceuticals

    Dear all,
    I was wondering if there is any news about the Life Science Best Practice for Pharma. The latest Pharma Best Practice dates back to 2007.
    Maybe with the 7.0 release there will also be an update of the Best Practice, but there is nothing to be found regarding this subject.
    Anybody got some insight?
    Thanks.
    Regs,
    Ruud

    Would love to get some insights too!
    Agne

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested not only in task success but also in best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    It generated this code snippet for the table, the primary key, and every index.
    Is it necessary to include this in my code if these are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
    Justin
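    As an illustration of the suggestion above only (the schema, table, database link, and connection details are placeholders, and a fast refresh assumes the source table has a primary key and that an Oracle JDBC driver is on the classpath), the one-time setup might look roughly like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Sketch: one-time setup for a nightly, incrementally refreshed copy.
    // The MV log is created on the SOURCE database; the materialized view is
    // created on the DESTINATION database and pulls rows over a database link.
    public class NightlyCopySetup {
        public static void main(String[] args) throws Exception {
            try (Connection src = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//prod-host:1521/PRODSVC", "xyz", "secret");
                 Statement st = src.createStatement()) {
                st.execute("CREATE MATERIALIZED VIEW LOG ON xyz.patient WITH PRIMARY KEY");
            }
            try (Connection dst = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dw-host:1521/DWSVC", "xyz", "secret");
                 Statement st = dst.createStatement()) {
                st.execute("CREATE MATERIALIZED VIEW patient_mv"
                        + " BUILD IMMEDIATE"
                        + " REFRESH FAST"
                        + " START WITH SYSDATE NEXT TRUNC(SYSDATE) + 1"  // refresh nightly
                        + " AS SELECT * FROM xyz.patient@prod_link");
            }
        }
    }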

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY ?
    If so, please supply an example.
    Thanks...
    ....Mike

  • SCCM 2012 Update deployment best practices?

    I have recently upgraded our environment from SCCM 2007 to 2012. In switching over from WSUS to SCCM Updates, I am having to learn how the new deployments work.  I've got the majority of it working just fine.  Microsoft Updates, Adobe Updates (via
    SCUP)... etc.
    A few users have complained that the systems seem to be taking up more processing power during the update scans... I am wondering what the best practices are for this...
    I am deploying all Windows 7 updates (32 and 64 bit) to a collection with all Windows 7 computers (32 and 64 bit)
    I am deploying all Windows 8 updates (32 and 64 bit) to a collection with all Windows 8 computers (32 and 64 bit)
    I am deploying all office updates (2010, and 2013) to all computers
    I am deploying all Adobe updates to all computers... etc.
    I'm wondering if it is best to be more granular than that? For example: should I deploy Windows 7 32-bit patches to only Windows 7 32-bit machines? Should I deploy Office 2010 Updates only to computers with Office 2010?
    It's certainly easier to deploy most things to everyone and let the update scan take care of it... but I'm wondering if I'm being too general?

    I haven't considered cleaning it up yet because the server has only been active for a few months... and I've only connected the bulk of our domain computers to it a few weeks ago. (550 PCs)
    I checked several PCs, some that were complaining and some not. I'm not familiar with what the standard size of that file should be, but they seemed to range from 50M to 130M. My own is 130M but mine is 64-bit, the others are not. Not sure if that makes
    a difference.
    I briefly read over that website. I'm confused; it was my impression that WSUS was no longer used and only needed to be installed so that SCCM could use some of its functions for its own purposes. I thought the PCs no longer even connected to it.
    I'm running the WSUS cleanup wizard now, but I'm not sure it'll clean anything because I've never approved a single update in it. I do everything in the Software Update Point in SCCM, and I've been removing expired and superseded updates fairly regularly.
    The wizard just finished, a few thousand updates deleted, disk space freed: 0 MB.
    I found a script here in technet that's supposed to clean out old updates..
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    Haven't had the chance to run it yet.

  • Best practice for upgrading task definition without deleting task instances

    What is the best practice for upgrading a task definition in a production system without deleting or terminating task instances?
    If I try to update a task definition with task instances running, I get the following error:
    Task definition 'My Task - Add User' may not be modified while there are active task instances
    Is there a best practice for handling this?  I tried to force an update through the console, but that didn't work.  I tried editing the task from the debug page and got the same error.

    1) Rename the original task definition.
    2) Upload the new task definition with the original name.
    3) Later, after all the running tasks have timed out, delete the old definition.
    E.g., if your task definition is "myWorkflow":
    1) Rename "myWorkflow" to "myWorkflow-old-2009-07-28"
    2) Upload the new task definition as "myWorkflow".
    Existing tasks will stay linked to the original (renamed) workflow definition.
    New tasks will use the new definition.
    As the previous poster notes, depending on the changes you are making, letting the old task definitions stay active could have bad side-effects and might be better avoided.

  • Exchange 2010 - What is best practice for protection against corruption replication?

    My Exchange 2010 SP3 environment includes a DAG with an offsite passive copy.  The DB is backed up nightly with TSM TDP.  My predecessor also installed DoubleTake software to protect the DB against replication of malware or corruption to the passive MB server.  DoubleTake updates the offsite DB replica every 4 hours.  Understanding that this is ultimately a decision based on my company's risk tolerance, to that end, what is the probability of malware or corruption propagating due to replication?
    What is industry best practice: do most companies have a third, lagged copy of the DB in the DAG, or are third-party solutions such as DoubleTake commonly employed?  Are there other, better (and less expensive) options?

    Correct. If an 8-day lagged copy is maintained, then 8 days' worth of daily transaction log files are preserved before being replayed into the lagged database.  This enables point-in-time recovery, as you can select the log files that you need to replay into the database.
    Logs are truncated once they have been successfully replayed into the database and have passed their lag time-stamp.
    Each database copy has a checkpoint file (.chk), which keeps track of transaction log file status.
    Command to check the Transaction Logs replay status:
    eseutil /mk <path-of-the-chk-file>  - (stored with the Transaction log files)
    - Sarvesh Goel - Enterprise Messaging Administrator

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and change it manually for the row in AUSP to get updated.
    Should I just create a report to loop through and update table AUSP directly?  Or is there a better way to do this via a function module or BAPI, etc.?  I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

Maybe you are looking for

  • One website creates havoc with my iMac - what can be done?

    I am an administrator for our church website. When I go on it to work, after about 10 minutes either Safari or Firefox, whichever I'm using, freezes completely. I have to force quit and restart about every 10 minutes. In fact it isn't only that it fr

  • Mac Mini as a "Media Center"...Anyone doing this?

    I am considering purchasing a single core mini to use as a media center. If you are currently doing this, how is it workingout for you? How is the video quality on your television while viewing pics from iPhoto? How is the streaming of music from iTu

  • Request for Samsung NX10 support

    I didn't see any threads on this yet so I'd like to make a request for Adobe to add support for the new Samsung NX10 RAW format in Lightroom.

  • Int 2 Serializable

    I have a file with a construction: Serializable[] serializable = new int[n]; It was generated with jad.exe decompiler from .class file. When I use javac compiler (java version v1.2.2) it reports an error that the types java.io.Serializable and int is

  • IMac ISP failure.

    Snice the latest Yosemite update I am experiencing ISP failure On my 2014 iMac. Repaire with diagnostics but fails again. My iPad and phone are working fine. Router working fine. Help?