Rating indicator in an ObjectHeader?

Just wanted to know: how can we add a rating indicator to an ObjectHeader?

Hi Priya,
Initially I thought that we could add a rating indicator, and I coded it up in a JS Bin, but it is not getting added.
Then I referred to the SAPUI5 documentation (SAPUI5 SDK - Demo Kit), and as per that:
The statuses aggregation of ObjectHeader is a list of sap.ui.core.Control, but it will only allow sap.m.ObjectStatus and sap.m.ProgressIndicator controls.
Hence I believe you cannot add a rating indicator.
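A minimal sketch of what the statuses aggregation does accept (the id, title, and texts here are made up for illustration):

    // Both of these controls render inside the ObjectHeader:
    var oHeader = new sap.m.ObjectHeader("oh1", {
        title: "Purchase Order 4711",
        statuses: [
            new sap.m.ObjectStatus({ text: "Approved", state: "Success" }),
            new sap.m.ProgressIndicator({ percentValue: 60, displayValue: "60%" })
        ]
    });
    // A sap.m.RatingIndicator can be added to the aggregation without an
    // error, but (as observed above) it is simply not rendered:
    // oHeader.addStatus(new sap.m.RatingIndicator({ value: 3 }));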
Regards,
Chandra

Similar Messages

  • Why can I not get v30 to have the same layout as v26?

    In v26.0 of Firefox, I configured my menu bar ("File", "Edit", etc) to contain the following after the menu items:
    - back button
    - forward button
    - reload/stop button
    - home button
    - progress indicator
    - site rating indicator
    - address bar
    - tab-list drop down
    This kept Firefox nicely uncluttered. After upgrading to v30.0:
    - the address bar has to be below the tabs (ie it cannot be moved to the menu bar)
    - there are no forward/back buttons (they appear to be part of the address bar)
    - there does not appear to be a reload/stop button
    - there does not appear to be a progress indicator
    - there does not appear to be a site rating indicator
    This is shit. How the fuck can this be called progress???

    "You can look at the Classic Theme Restorer extension to restore some functionality that was lost with the arrival of the Australis style in Firefox 29."
    Precisely. How is "loss of functionality" progress or an upgrade? Why should I have to install an extension just to get a "reload" button? What a pile of crap. Why not have a "Use shit interface" tickbox in the Options dialogue so people can choose if they want their version of Firefox to have pants functionality?

  • Power supply trouble

    I picked up a used Toshiba A25-S207 and bought a new power cord for it. When I plug it in, it still does not power up, but I get a yellow flashing power icon on the front of the unit. Can anyone tell me what the issue is?

    If the orange light flashes three times, it means the battery charge is low. If you do not see the charger light at all, it means the power cord (you mean the power adapter?) does not work or is not compatible with your unit.
    Make sure it has the same power rating as indicated on the Toshiba label on the bottom. Best to get the AC adapter directly from Toshiba.

  • AGP G4/400 fans growling at me

    I have a 400 MHz AGP G4 that I use as my main computer. I have noticed it is making quite a lot of noise. The fans are quite loud; I'm not sure which one is the noisiest, but I'm pretty sure they both are. Sometimes when I start up the computer, the fans make a growling noise while beginning to spin up. I've stuck a hair dryer in there and blown all the dust out of the case, and it is still noisy. Would it be inexpensive/easy to replace the case fan/PSU fan, and if so, what types of fans would I need?
    thanks,
    Allen

    Have you used compressed air to clean out any accumulated dust in and around the fans? If dust isn't a contributing cause of the noise, you may wish to replace them. This recent Topic discussed replacement of the fans in the G4-AGP model. The cost isn't high, and keeping the internal temperatures down is very important.
    To quiet noisy fans, some have reported success by removing the foil label on the back of the fan, applying a light lubricant, and re-applying the label or a piece of vinyl tape (to keep the lubricant from dripping out). I've had to do this for some of those non-standard mini-fans on graphics cards. If a fan has gotten noisy, either the sleeve bearing or the ball-bearing race has probably gotten worn from friction. With large, industry-standard fans readily available, I'd rather replace a noisy one. The case fan isn't such a headache to remove and try lubricating, but if you go to the trouble of removing the power supply to access its fan for a temporary fix like this, your time would be better spent just replacing it. A lubricated older fan will never compare to a new one.
    As I mentioned in another Topic, you want to purchase fans with high airflow for the speed, but pay attention to the noise level (dBA rating) indicated. Cooling fans should do their job without being noticed.

  • Vendor rating and insurance indicator table

    Hi all,
        In which table are the vendor rating and insurance indicator fields (zz_moc_rating & zz_insurance) shown in EKKO actually stored?

    Hi Sathya,
    These are custom fields, not created by SAP but by someone in your project.
    Check in EKKO whether these fields have check tables against them. If yes, they are populated from there; otherwise they are filled from some screen or program where a user enters input.
    Regards,
    Chetan.
    PS: Reward points if this helps.

  • Chat feedback refused to submit a bad rating!

    Here's a summary - needless to say I'm annoyed
        Chat ID for this session is {edited for privacy}. (16:22:57)
    Live Chat Customer Satisfaction Survey
    Thanks for chatting. Please provide us with your feedback so we may serve you better.
    Required items indicated with *.
    1. Was your reason for chatting with us resolved today?*
     No
    2. Overall, how would you rate your experience using Verizon Live Chat service?*
     Poor
    3. How likely is it that you would recommend Verizon Live Chat service to a friend or colleague (where zero is not at all likely and ten is extremely likely)?*
    0
    4. Please indicate your preferred method of communicating with Verizon.*
    E-mail
    5. Please provide any additional feedback on your Verizon Live Chat experience.
        Cut off while formulating reply (@4:45pm - so not at end of the day!).
    Problem not addressed.

    Hey FuzzyK,
    I also got the same problem.
    I answered mostly the same as you, except #3: I gave them 1. I tried to submit it, but it did not go through. However, when I changed the answer to Question 1 to YES, it went through!
    I just wonder how Verizon is going to improve their services if they refuse bad ratings.

  • Delivery rating in vendor evaluation

    Can I know on what logic the system calculates the delivery rating for a scheduling agreement which has more than one schedule line?
    Regards,
    M.M

    Dear Magesh,
    This is for On time delivery:
    ===========================================================
    At the time of a goods receipt against a purchase order, the system checks whether the delivery was on time or late:
    GR date - delivery date = date variance
    If the delivery was on time, the system checks whether a minimum delivery quantity is to be taken into account.
    It checks the Minimum delivery quantity/standardizing value for delivery time variance from material indicator.
    – If the indicator has been selected, it checks whether a value has been maintained in the material master record.
    If a value exists, it is used. If no value exists, the value from the Minimum delivery percentage field is used. If there is no such value in this case either, goods receipts of fractions of the order quantity are included in the calculation of the score.
    – If the indicator has not been selected, the value from the Minimum delivery percentage field is used. If no value exists, goods receipts of fractions of the order quantity are included in the calculation of the score.
    If the minimum delivery quantity is not reached, the system does not determine a score for the goods receipt.
    If the minimum delivery quantity is reached and the delivery date adhered to, the variance is zero and the system awards the highest points score for the relevant goods receipt.
    If the delivery is not on time, the system calculates the date variance in days and converts it into a percentage variance.
    In the process, the system searches for the standardizing value as follows:
    It checks the Minimum delivery quantity/standardizing value for delivery time variance from material indicator.
    – If the indicator has been selected, it checks whether a value has been maintained in the material master record.
    If a value exists, it is used. If not, the value from the Standardizing value for delivery time variance field is used. If there is no such value in this case either, one of the following values is used:
    – in the case of goods receipts against scheduling agreement releases: the firm zone
    – in the case of goods receipts against purchase orders: the difference between the statistical delivery date and the order date
    – If the indicator has not been selected, the value from the Standardizing value for delivery time variance field is used. If there is no such value, one of the following values is used:
    – in the case of goods receipts against scheduling agreement releases: the firm zone
    – in the case of goods receipts against purchase orders: the difference between the statistical delivery date and the order date
    The system then awards the score you defined in Customizing for this percentage variance.
    The new score is then included in the vendor's previous score for the subcriterion. To calculate the new score for the subcriterion from the already existing composite score and this new individual score, the system applies the smoothing factor Date variance defined in Customizing.
    When you run a new evaluation, the system calculates the average of the individual scores for all materials. The result is the vendor's score for "On-Time Delivery Performance".
    If a goods receipt covers several schedule lines, MM Vendor Evaluation performs this calculation for each schedule line.
    This results in more than one score for the single goods receipt. Each of these scores is multiplied by the quantity delivered against the relevant schedule line and is thus weighted by quantity.
    The sum of the weighted points scores is divided by the total goods receipt quantity. The result is the score for this one goods receipt.
    The score for the goods receipt is smoothed and included in the existing score for on-time delivery performance for the vendor.
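    To make the date-variance step concrete, here is a minimal JavaScript sketch (not SAP code; the function and parameter names are made up):

        // Date variance in days, standardized to a percentage variance.
        // standardizingDays is the value found by the search described above
        // (material master, Customizing field, firm zone, or the statistical
        // delivery date minus the order date).
        function dateVariancePercent(grDate, deliveryDate, standardizingDays) {
            var msPerDay = 24 * 60 * 60 * 1000;
            var days = Math.round((grDate - deliveryDate) / msPerDay); // GR date - delivery date
            return 100 * days / standardizingDays;
        }
        // Example: 5 days late against a 20-day standardizing value => 25% variance,
        // which Customizing then maps to a points score.
        dateVariancePercent(new Date("2024-01-25"), new Date("2024-01-20"), 20); // 25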
    Adherence to Confirmation Date:
    ===========================================================
    1. When a goods receipt is posted for an order item, the system checks whether open inbound deliveries exist for the order item, i.e. inbound deliveries for which no goods receipts have yet been recorded.
    If open inbound deliveries exist, the system checks whether the date previously notified by the vendor has been adhered to or whether the goods receipt date differs from the date thus confirmed:
    GR date - date confirmed by vendor in shipping notification = date variance
    2. If the delivery was on time, the system checks whether a minimum delivery quantity has been defined in Customizing.
    a. If you have not set a minimum delivery quantity, goods receipts representing partial deliveries will also be included in the calculation of the score.
    b. If the minimum delivery quantity is not reached, the system does not determine a score for the goods receipt.
    c. If the minimum delivery quantity is reached and the delivery is on time, the variance is zero. This means that the system awards the maximum score for the goods receipt.
    3. If the delivery date is not in accordance with the notified one, the system calculates the date variance in days and converts it into a percentage variance.
    4. The system then awards the score you defined in Customizing for this percentage variance.
    5. The new score is then merged with the vendor's previous score for the subcriterion. To calculate the new score for the subcriterion from the already existing composite score and this new individual score, the system applies the smoothing factor On-time delivery performance/Timeliness of service provision (defined in Customizing for Vendor Evaluation under Purchasing organization data for Vendor Evaluation).
    6. When you run a new evaluation, the system calculates the average of the individual scores for all materials. The result is the vendor’s score for adherence to the confirmation date.
    If a goods receipt covers several schedule lines, MM Vendor Evaluation performs this calculation for each schedule line.
    This results in more than one score for the single goods receipt. Each of these scores is multiplied by the quantity delivered against the relevant schedule line and is thus weighted by quantity.
    The sum of the weighted points scores is divided by the total goods receipt quantity. The result is the score for this one goods receipt.
    The score for the goods receipt is smoothed and merged with the existing score for adherence to the confirmation date for the vendor.
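    And a sketch (again not SAP code; names are hypothetical) of the quantity weighting and smoothing described in both sections above:

        // Score for one goods receipt covering several schedule lines:
        // each line's score is weighted by the quantity delivered against it,
        // and the sum is divided by the total goods receipt quantity.
        function goodsReceiptScore(lines) {
            var weighted = 0, totalQty = 0;
            lines.forEach(function (l) {
                weighted += l.score * l.qty;
                totalQty += l.qty;
            });
            return weighted / totalQty;
        }
        // Assumed form of the smoothing: the new composite moves toward the
        // new individual score by the smoothing factor from Customizing.
        function smooth(oldComposite, newScore, factor) {
            return oldComposite + factor * (newScore - oldComposite);
        }
        // Example: two schedule lines (scores 100 and 50, quantities 40 and 60)
        var grScore = goodsReceiptScore([{score: 100, qty: 40}, {score: 50, qty: 60}]); // 70
        var composite = smooth(85, grScore, 0.2); // 85 + 0.2 * (70 - 85) = 82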
    Regards,
    w1n

  • Bigger 'Rejected' Indicator in Loupe View?

    Any way to get a larger, solitary flag indicator in the loupe view?
    Kind of like taking a red Sharpie and drawing a line through rejects on a contact sheet or proofs. Very rapid to discern and move on.
    (The custom loupe overlays include the flag status, but they're lumped in with the label and star rating (which I don't use), and are tedious to read.)
    Maybe a plug-in? Or is this a feature request? Thanks.

    Any way to get a larger, solitary flag indicator in the loupe view?
    No.
    Why is this a concern? Lightroom gives you several ways to find the items with a rejected flag.

  • Thumbnail play indicator on mp3 disappeared

    I haven't used Bridge too much, so excuse me if this is a dumb question, but I can't figure this out. I have PS CS5, and in full Bridge I was looking at a bunch of mp3 files and trying out different settings, like marking them for review, etc. When I clicked on any mp3 thumbnail, it would automatically start playing, with a play indicator below the thumb's rating and name. They were all working. I then selected one mp3 thumb and gave it a rating of 4; the thumb changed, shows no sign of the play indicator, and no longer automatically plays when I select it. The other thumbs still play; I can change their ratings and they still play and show the indicator. This one file now only plays if I right-click and select Open, which opens a pop-up window. It still shows the star rating and name but not the play indicator. I tried purging the cache; the file isn't locked, and I can see nothing else that would cause it to suddenly not play automatically.
    What am I doing wrong?  Thanks,

    I'm not talking about embedded file EXIF or IPTC data. I was concerned about the data Bridge adds... ratings, notes, categories, etc., and where it is all kept. I would want that backed up, so that if the database becomes corrupted I don't lose all the work and time I have spent on it, along with a way to easily restore it. Normally there is no way to even see the path you mentioned. When I uncheck "hide protected operating system files" I can see the folder for user\app data, but I can't open it, even as administrator. Maybe something to do with Windows 7? Anyway, I would have thought there would be a built-in data backup like ACDSee has in its browsers.
    I am leery of doing a reset, as I am afraid I will lose all the setup preferences I have, and I have no idea what else it might reset. I did look online at Adobe but haven't found any clear information about this. I suffered data loss years ago with early versions of ACDSee, before they improved the backup and restore features of their database, and it is a huge loss when you no longer have all the keywords, categories, ratings, etc. that you have spent endless hours on. They now have the option of using "sidecar" data files that move with the files; I think this is an excellent idea.
    Thanks,

  • Survey - Rating Factor - CRM

    Hello
    I need to retrieve the values of the questions (Rating Factor) and answers (Rating) of questionnaires, to create an indicator for the completed surveys.
    Do you have an example?
    Help me, please!
    I appreciate it in advance
    Message was edited by:
            Raul Alvarado

    Hello Raul,
    Sorry, I am still a bit confused, but let me try to help.
    In transaction CRM_SURVEY_SUITE:
    If you open a survey in change mode and expand the node on the left-hand side in the "Answer" area, then select each answer option, you will find a field named "Rating" on the right side; here you can provide the rating values.
    If you would like to let the end user discard the survey result and start filling in the survey from scratch, you can add a pushbutton with Function Code = RESET.
    Hope this helps!
    Hongyan

  • Using Aperture's star rating system

    Forum member mrhooper posted something in another thread (http://discussions.apple.com/thread.jspa?messageID=13357908#13357908) which caught my eye. I didn't have time to respond, and now that thread is marked answered -- and my reply is slightly off-topic -- so I started this new thread in the hopes of learning how better to use the star rankings in Aperture.
    Below is what I do. I'm very curious to find out what you do -- or to hear any suggestions for making what I do more sensible.
    Here is (part of) mrhooper's comment.
    So we are now bringing over everything into Aperture and then rating images on a 1, 2, 4, 5 basis. Come on, if it's a 3, make a decision. Everything else is a 9, and then the rejects are deleted. And then the trash is emptied and they are out of the library.
    So we bring in the raw and then quickly rate, and I can't imagine a 400-image shoot taking much more than 5 minutes to rate.
    Go through once for 1, then for 2; by then the 4 is obvious, and the 5 we just love anyway. 9 everything else.
    Fwiw (always an auspicious start to an Internet posting), I TOTALLY agree re: the wasted gamut of the seven-level rating system (that's rejected-unrated-1-2-3-4-5).
    Here's what I've settled on:
    • If I look at a picture, I rate it. Reject or save-and-decide-later. That's "9" and "1". So "unrated" is meaningful to me. It doesn't mean "less than rated", and, importantly, it doesn't mean "not-rejected"; it means, literally, not-yet-rated. (The problem solved here occurs when I am part way through tossing rejects from a Project and I am interrupted.)
    Immediately after import, I add a Project description, assign a Place, and Stack images taken in burst mode or with any range of exposure settings. When I rate images in a Project, the first thing I do is go through these Stacks and select the most usable image from these minor variations (I need to do this before comparing un-like images in the Project). Within any Stack, I might have rejects ("9") and saves (1-star); the "pick" of the Stack gets a 2-star rating (1-star is already in use).
    • 1- and 2-star ratings have the same value to me as images. The only difference is that 2-star images came from Stacks. (Those Stacks may contain 1-star images which for any number of reasons I want to retain even though they are not a Stack pick and will likely never be promoted nor processed nor printed nor published).
    Now I can go through my Project and make my picks. I mark images to be developed in RED (I use the 8 color labels to track image development). All of these also get upgraded to 3- or 4-stars.
    • 3-star means "keep". The general idea is that in a few years time I will likely want to do some major weeding in my Aperture orchard. Any 1- or 2-star images will then be unceremoniously deleted. 3-star images will be kept.
    • 4-star means good enough for publication. One of the best of the Project.
    After I develop the images in a Project, I will re-assign the star rankings. During development, some images will be promoted.
    • 5-star means not only good enough to be published as one of the best of the Project (and I mean that at whatever level one works -- from emailed to friends to printed for sale), but good enough to be included in some other sampling of my work. (I keep an Album for my Portfolio; the images in it are selected from my 5-star images.)
    Note that in my use, star ratings are Project specific. There is no value-equivalence between the star ratings -- and I consciously try to avoid comparisons across Projects. (If I've done a shoot, I need to know which are the usable/salable shots in that shoot -- I don't need to know how they rank in my life's work.) All of the 4-star images in one Project may be of less comparative value to me than the only 4-star image in another Project.
    So in my system, "Reject", "3-star", and "4-star" are the only ratings which are necessary. They correspond to the all-important distinctions mrhooper indicated:
    - throw-out
    - keep but don't use
    - use.
    The other star ratings could -- and from a tidy-database standpoint, should -- be replaced by a metadatum that is not part of the rating gamut.
    The orchard floor is yours ...
    Message was edited by: Kirby Krieger

    Hi Kirby,
    Thanks for opening this up, as I suspect most of us have gone through the "let's see, that's a 1, or no... probably a 2; perhaps I can clean up the background and then it would be a 3" etc. etc.
    We settled on a similar process; as you suggest, it is a project-by-project rating, not our life's work.
    So we skip 3 and play with 4 and 5; those should be obvious.
    1 and 2, they are going to be deleted at some stage in the future, unless it is some rare thing, and that doesn't happen every shoot.
    In Jan this year we looked at the 2008 year's 1s and 2s and deleted most of them. Hey, we hadn't used them, touched them, improved them or even thought about them. Goodbye.
    Some memorabilia, sentimental value ones got to stay.
    When we look for the best, then 5 is the go. Maybe a review of the 4s, but only if we are struggling for 5s.
    Which, by the way, we tend to be fairly generous with, I suppose.
    Checked a shoot of 60 images early in the week: 15 are in the 4s and 5s, 8 are 5s.
    We also use the wonderful color system. Now this sorts out a lot of things for later smart albums.
    Here are a few: Green - teaching. Red - web site pages. Blue - the crown jewels, the ones we both love. Purple - spec shots for the photo library, etc.
    Yep, sometimes a shot might be two or more colors, but it is not stopping us at the moment.
    We do shoot burst, but don't stack. If it's a 4, I want to be able to see it. We create albums in the project (Star 4, Star 5), so there they are in a hurry. Yep, I know it is easy to do that in the search box, and you can use Ctrl 1-5, but the clever part comes later, when we can be really specific in searching for albums with Star 5, or whatever.
    Just to finish off, the 4 and 5 get keywords, lots of them. From event, to person, to specific lighting, to species, or building type or.....
    Want a pic of a 1937 Ford, in blue, at the Last Cafe, in spring, in the sunshine, front lit, side view?
    That is what Aperture does.
    and because of the way we use projects for each shoot, bet I could find it in about 10 seconds in quick review of the appropriate year or month as well.
    This is not the thread on project names, but we have from the beginning way back in iPhoto(5) put them in as a date and description.
    Cool thing now in Projects/Full Screen, is I can Filter by the date number so typing in 9012 gets me all the projects only from 2009, December. (9) 0 (12) Oh, am aware of what is going to happen when we have been round the scale once and the numbers come back. And we have a plan.
    I am sure there are other ways to start a good system, to use the ratings and the keywords of Aperture, and agree that you have to think project specific when it comes to what is a 5.
    Reserving 5 only for the "very best image you ever took of a rose" is a bit limiting, and what if my "very best waterfall picture" is even better?
    Thanks for starting a good ideas discussion.
    Regards
    DJ

  • Sorting Application list in iTunes by rating

    folks.. is there any way to sort apps in iTunes by rating? "Most Popular" is a function of most purchased, not of highest rating.
    I tend to look at ratings by the masses as one indicator of a program's quality (stability and usability). Thx ..Wayne

    There is not, no. But, it's a good suggestion - you might consider suggesting that to Apple:
    http://www.apple.com/feedback/

  • LabVIEW Forum Star Rating Seems Odd - For Those Blue Contributors ;)

    Whilst browsing through the forums one day....
    I noticed the following thread indicated as 1 star:
    Absolutely Beginner in LabView. (But advanced in CVI) Need help! http://forums.ni.com/ni/board/message?board.id=170&message.id=39090
    Now, I read these one-star threads because often the person giving one star either has a really good reason, or... it's controversial, or... well, anyway,
    I thought that the total seemed in error. That is to say, in the above thread the summary indicated 1 star / 1 rating, whilst there actually appear to be 2 ratings: one of 5 stars (no surprises, it was for a contribution from Dennis Knutson) and another of one star for Wendy L (I would actually have probably clicked some more stars on that, but I have left it alone for the moment).
    So why do the blue contributors get priority over anyone else? Or is it a bug? Or has the system changed?
    Now, I know the star ratings are contentious, but if the thing can't even add! I must have thousands of stars by now? Or is it really zero after all?
    Firefox all patched
    IE 6 all patched

    Tst,
    you're not alone, I can't rate those either....
    Ton
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!

  • Some assistance with Spry Rating Widget

    Hello, everyone.
    I have downloaded the Spry library from the repository, and am playing with it, trying to learn it.  So far, so good. 
    I have written a page that uses both the Spry Accordion Panel (I have 8 so far) and the Spry Rating Widget (one in each panel).
    I am displaying (below the stars) a tally of ratings for each panel (I'm keeping track in an XML file): the 1-5 star title, the number of times each is clicked, and a green bar (img) indicating percentage.
    When a star is clicked, the XML is updated, but you have to refresh the page to see the new values.  How can I dynamically, and in real time, update the value display when a star is clicked?  (Once I have that, I can figure out how to dynamically change the img size/percentage.)
    Thank you,
    ^_^

    WolfShade wrote:
    And I'm just starting to learn it, just because.  It's something different.  I'll probably never use any of it in any paid projects, but it's still nice to get a feel for it.
    I have never used the rating widget, but I do have the Spry example files from when they were originally released. I assume that you have managed to download the same files from GitHub.
    Open widgets.html in the widgets folder. Scroll down to the bottom, and click the Ratings > Overview link. Close to the bottom is a section titled "Update the Ratings Value Dynamically". The example code looks like this:
    <body>
        <span id="spryrating1" class="ratingContainer">
            <span class="ratingButton"></span>
            <span class="ratingButton"></span>
            <span class="ratingButton"></span>
            <span class="ratingButton"></span>
            <span class="ratingButton"></span>
            <input type="text" id="ratingValue" name="dynamic_rate" value="2"/>
        </span>
        <script type="text/javascript">
            var rate = new Spry.Widget.Rating("spryrating1", {ratingValueElement:"ratingValue", afterRating:'serverValue', saveURL:'SpryRating.php?id=spryrating5&val=@@rw_Rating@@'});
        </script>
    </body>
    Notice that the options object contains this: afterRating: 'serverValue'. Unfortunately, the sample files (at least the ones I've got) don't include a copy of SpryRating.php. However, I assume that it probably saves the selected value, calculates the average of all saved values, and then uses echo to output "serverValue=4.5" (or whatever the average is).
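    As for updating the tally without a page refresh, one low-tech option (plain JavaScript, not part of the Spry API; the element ids and XML layout here are assumptions about your setup) is to re-fetch your ratings XML after each click and rewrite the display:
    <script type="text/javascript">
    // Assumes each accordion panel has a tally element with id "tally0", "tally1", ...
    // and ratings.xml contains one <panel> node per panel with five <count> children.
    function refreshTally(panelIndex) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "ratings.xml?nocache=" + new Date().getTime(), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4 || xhr.status !== 200) return;
            var counts = xhr.responseXML.getElementsByTagName("panel")[panelIndex]
                                        .getElementsByTagName("count");
            var parts = [];
            for (var i = 0; i < counts.length; i++) {
                parts.push((i + 1) + " star: " + counts[i].firstChild.nodeValue);
            }
            document.getElementById("tally" + panelIndex).innerHTML = parts.join(" | ");
        };
        xhr.send();
    }
    // Re-fetch shortly after each star click, giving the save request time to land:
    document.getElementById("spryrating1").onclick = function () {
        setTimeout(function () { refreshTally(0); }, 300);
    };
    </script>
    You would resize the green bar img the same way: fetch the new counts, compute the percentage, and rewrite the DOM.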

  • I'M AN IGNORANT IMPORTER! Want to add MPAA rating.

    I was wondering if anyone knows of any AppleScripts or a program that would allow me to indicate a movie's MPAA rating (i.e., G, PG-13, R, etc.). I know that the rating is indicated when you purchase a movie from the iTunes Store, but it doesn't give you the option when adding an imported movie.

    Have a look here:
    http://dougscripts.com/itunes/
    I don't know if he has exactly what you want - there are over 400 scripts and add-ons there - if not you could always mail him with a suggestion.
