Retrieving Performance Metrics from Azure Redis Portal
We are evaluating our application's performance in the cloud. We are automating the process of running tests and collecting performance counters from the application (from web roles, the SQL Azure database, and the Redis cache).
We can get the metrics for the Redis cache instance on the portal, but we want to retrieve those metrics programmatically and use them for comparing results between different test runs. What is the best way we can achieve that?
Hi Harit,
Here is some background and an answer.
Redis is only available in the new preview Azure portal. The new portal uses a new underlying deployment mechanism called CSM. The libraries you mentioned do not support Redis; there is a new set of libraries (Microsoft.Azure.Insights) that you have to use. These libraries are in preview.
Please take a look at the SDK here:
https://www.nuget.org/packages/Microsoft.Azure.Insights/ The documentation is here:
http://msdn.microsoft.com/en-us/library/azure/dn802153.aspx
I have a sample which shows how you can get the data for Redis.
https://github.com/rustd/RedisSamples/blob/master/CustomMonitoring/Program.cs
Please make sure that when you enable diagnostics for your cache, you have also provided a storage account. This is a new feature we are rolling out where the diagnostic data is stored in a storage account specified by the user.
Hopefully this answers your question.
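Whichever client you end up using to pull the metric values, the comparison step the question asks about is simple once you have the samples in hand. A minimal sketch, in Python for brevity rather than the .NET SDK, with made-up metric names and sample values standing in for whatever the monitoring API or the diagnostics storage account returns:

```python
# Compare per-metric averages between two test runs.
# The metric names and sample values below are hypothetical; in practice
# they would come from the monitoring API or the diagnostics tables
# in the configured storage account.

def summarize(samples):
    """Average each metric's samples for one test run."""
    return {name: sum(vals) / len(vals) for name, vals in samples.items()}

def compare_runs(baseline, candidate):
    """Return the per-metric delta (candidate - baseline) of the averages."""
    base, cand = summarize(baseline), summarize(candidate)
    return {name: cand[name] - base[name] for name in base if name in cand}

run1 = {"Used Memory (MB)": [120, 130, 125], "Gets/sec": [900, 950, 1000]}
run2 = {"Used Memory (MB)": [150, 160, 155], "Gets/sec": [880, 920, 930]}

print(compare_runs(run1, run2))
# → {'Used Memory (MB)': 30.0, 'Gets/sec': -40.0}
```

Persisting these summaries per run (in the same storage account, say) then lets you diff any two runs after the fact.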
Similar Messages
-
I tried to deploy the Hello World JSP application to Azure from Eclipse following the instructions at http://msdn.microsoft.com/library/azure/hh690944%28VS.103%29.aspx. Got the following error when trying to import Publish Settings file downloaded from Azure
Management portal:
Importing MyMSDN.publishsettings file failed. Reason: failed to parse file. Ensure publish settings version is 1.0.
Could someone help me solve this problem? Thanks.
Hi huihuang,
It seems there is some issue with your publishsettings file. Could you open the file and copy its content here? Or please have a look at this article for more information:
http://gauravmantri.com/2012/09/14/about-windows-azure-publish-settings-file-and-how-to-create-your-own-publish-settings-file/
Best Regards,
Jambor
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.
Click
HERE to participate in the survey. -
Important performance metrics from Tivoli, MOM, etc. into Oracle database
Hi,
I'm brand new to all of this. I'm trying to grab performance data from a number of performance tools (MOM, Tivoli, LoadRunner, custom client tools, etc.) and import it into an Oracle database (the performance data would have to map to a common form). Then, ideally, I'd like to be able to link that data up to Excel somehow and make graphs from it. Is this feasible? Where can I go for information to help me out with this?
Thanks!
Everything you've written is wrong. Well, not just wrong but horribly wrong.
Here are my thoughts in no particular order.
1. Since you are brand new, you should push yourself back from the keyboard and educate yourself before you proceed. Is there anyone in your organization you can ask for help?
2. Your entire concept of "import it into one big file" is not just a complete non sequitur, it is essentially impossible. A relational database such as Oracle stores information in logical segments called tables. You need to design a table with columns that correspond with your data and comply with normalization rules.
http://www.psoug.org/reference/normalization.html
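As a sketch of what point 2 is describing, here is a hypothetical normalized layout, shown with SQLite in Python so it runs standalone; the table and column names are illustrative only, not a prescription for your Oracle schema:

```python
import sqlite3

# Illustrative normalized schema: one row per test run, one row per
# (run, tool, metric) sample. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_run (
    run_id    INTEGER PRIMARY KEY,
    run_name  TEXT NOT NULL,
    started   TEXT NOT NULL
);
CREATE TABLE metric_sample (
    sample_id INTEGER PRIMARY KEY,
    run_id    INTEGER NOT NULL REFERENCES test_run(run_id),
    tool      TEXT NOT NULL,     -- e.g. 'MOM', 'Tivoli', 'LoadRunner'
    metric    TEXT NOT NULL,     -- e.g. 'cpu_pct'
    value     REAL NOT NULL
);
""")
conn.execute("INSERT INTO test_run VALUES (1, 'baseline', '2015-01-01')")
conn.executemany(
    "INSERT INTO metric_sample (run_id, tool, metric, value) VALUES (?, ?, ?, ?)",
    [(1, "MOM", "cpu_pct", 42.0), (1, "Tivoli", "cpu_pct", 40.5)],
)
avg = conn.execute(
    "SELECT AVG(value) FROM metric_sample WHERE run_id = 1 AND metric = 'cpu_pct'"
).fetchone()[0]
print(avg)  # → 41.25, the baseline run's average cpu_pct across tools
```

Splitting runs from samples this way is what lets a reporting tool aggregate and compare runs without parsing "one big file".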
3. Unless you are free of any governmental restrictions with respect to auditing and compliance, you've got no business ever taking data out of a database and putting it into a toy like Excel. Get a report writer such as one of the tools Oracle sells, or Crystal Reports, or Cognos, etc.
Where can you go for information? I'd suggest you start with the Oracle DBA who is managing the database. Buy him or her lunch and ask for help. Then go to your manager and ask for training.
My apologies if this sounds harsh, but sitting here reading this was roughly equivalent to watching two cars at 100 km/h heading toward each other on a one-lane road. -
Need to Retrieve Performance Details from System tables..
I want to retrieve the performance details (execution time, number of tables affected, number of rows selected) of a procedure that I execute. I tried searching the system tables and monitoring views, but I am not able to find out which table all the performance data for the procedure I execute gets stored in.
Message was edited by: Tom Flanagan
Hi Vignesh,
First, I would recommend that you go through this video; it serves as a base for the monitoring topics: Monitoring SAP HANA - Checklist | SAP HANA
Thanks,
Razal -
Accessing Performance Data from third party Application
I have a requirement where I have to fetch performance metrics from SAP through a Java application. I know there are some RFCs available in the CCMS module, but the customer is not interested in CCMS.
So, is there any way to achieve it?
If anyone had come across this situation, please let me know.
Thanks in Advance,
Vijay
Hi Vijay,
IBM Tivoli will work for IBM servers. Please follow this link; it may be helpful to you.
http://help.sap.com/saphelp_nw70/helpdata/EN/42/e704f8999a3ee8e10000000a1553f7/frameset.htm
Regards,
Raju. -
How do I improve performance while doing pull, push and delete from Azure Storage Queue
Hi,
I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: the application pulls a message from the queue, processes it, then deletes it from the queue. This module also sends back a notification to the user indicating that processing is complete. The functions/modules work fine, in that they meet the logical requirement. It is a pretty typical queue scenario.
Now, coming to the problem statement: since it is envisaged that the queue will be heavily loaded most of the time, I am pushing to speed up processing across the overall message lifetime. The faster I can clear messages, the better the overall experience will be for everyone, system and users.
To improve performance I ran multiple cycles of performance profiling, then improved the identified "hot" paths/functions.
It all came down to a point where the Azure queue pull and delete are the two most time-consuming calls. I further improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), which paid off by reducing processing time by a big margin. All good up to this point as well.
I am processing these messages in parallel so as to improve overall performance.
pseudo code:
// AzureQueue class encapsulates calls to Azure Storage Queue.
// Assume nothing fancy inside; vanilla calls to the queue for pull/push/delete.
var batchMessages = AzureQueue.Pull(32);
Parallel.ForEach(batchMessages, bMessage =>
{
    try
    {
        // DoSomething does some background processing
        DoSomething(bMessage);
    }
    catch (Exception ex)
    {
        // Log exception
    }
    AzureQueue.Delete(bMessage);
});
With this change, profiling results now show that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as processing is done, I remove it just after "DoSomething" is finished.
What I need now are suggestions on how to further improve the performance of this function when 90% of the time is being eaten up by the Azure queue delete call itself. Is there a better, faster way to perform the delete, a bulk delete, etc.?
With the implementation mentioned here, I get a speed of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
Does it also make a difference to performance which queue delete overload I am calling? As of now the queue has overloaded methods for deleting a message: one which accepts a message object and another which accepts a message identifier and pop receipt. I am using the latter here, with message identifier and pop receipt, to delete the message from the queue.
Let me know if you need any additional information or any clarification of the question.
Inputs/suggestions are welcome.
Many thanks.
The first thing that came to mind was to run the delete in parallel with the work in DoSomething. If DoSomething fails, add the message back into the queue. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
Or, queue the delete on the thread pool after the work is successful: fire and forget. However, if you're processing at 25/sec and 90% of the time sits in the delete, you'd quickly accumulate delete calls in the thread pool and never catch up. At 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
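The thread-pool delete idea can be sketched as follows; this is Python rather than .NET, and the queue client is a stub standing in for the real Azure SDK, so only the fire-and-forget shape is being illustrated:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeQueueClient:
    """Stand-in for the Azure Storage Queue client; delete is the slow call."""
    def __init__(self):
        self.deleted = []
        self._lock = threading.Lock()

    def delete(self, message_id, pop_receipt):
        with self._lock:
            self.deleted.append(message_id)

client = FakeQueueClient()
delete_pool = ThreadPoolExecutor(max_workers=8)  # deletes run off the hot path

def process(message_id, pop_receipt):
    # ... DoSomething(message) would run here ...
    # Fire-and-forget: hand the delete to the pool and move on.
    delete_pool.submit(client.delete, message_id, pop_receipt)

batch = [(i, f"receipt-{i}") for i in range(32)]  # a pulled batch of 32
for mid, receipt in batch:
    process(mid, receipt)
delete_pool.shutdown(wait=True)  # drain pending deletes before exiting
print(len(client.deleted))  # → 32
```

The caveat above still applies: if deletes arrive faster than the pool drains them, the backlog grows without bound, so a real system would cap the pending-delete queue and apply backpressure.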
I wonder if calling the delete REST API yourself may offer any improvements. If you find the delete sets up a TCP connection each time, this may be all you need. Try to keep the connection open, or see if the REST API can delete more at a time
than the SDK API can.
Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete. If that's still not good enough, add more instances.
Darin R. -
1)
Is there North Bound Interface / API from SAP Solution Manager available for 3rd party integration?
i. The list of the modules that are being managed by SAP Solution Manager(s)
ii. The performance metrics of those modules/components at the high level
iii. The information about Early Watch Alerts (or situations to watch for)
2)
Is there a full SNMP interface for getting the above information from SAP Solution Manager?
3)
Is the understanding that SAP has SNMP support for forwarding alerts to a third-party system correct?
4)
Does SAP have both free and licensed versions? If yes, what are the advantages of the licensed version over the open/free one?
Mugunthan,
Yes, we have applied 11i.AZ.H.2. I am still getting several errors that we are trying to resolve.
One of them is
===========>>>
Uploading snapshot to central instance failed, with 3 different messages
Error: An invalid status '-1' was passed to fnd_concurrent.set_completion_status. The valid statuses are: 'NORMAL', 'WARNING', 'ERROR'.
FND at oracle.apps.az.r12.util.XmlTransmorpher.<init>(XmlTransmorpher.java:301)
at oracle.apps.az.r12.extractor.cpserver.APIExtractor.insertGenericSelectionSet(APIExtractor.java:231)
please assist.
regards
girish -
Deleting a Web app from the Azure Management Portal
When I selected a web app in the Azure Management Portal and clicked Delete at the bottom, the message "Loading" would hang there for hours without actually deleting that web app. I just got started with the free trial. Am I missing something?
Thanks, Paul
Hi,
Have you linked a SQL Azure database to the web app? Please check, unlink the SQL database, and then try to delete the web app from the portal.
Clear cookies and cache files from Internet Explorer and let us know the results.
Girish Prajwal -
I can log in to the management portal, but when it starts populating the services, the page goes grayscale and I get the message "You have signed out elsewhere. Click OK to log out from the management portal." with only an OK button. This is the same issue that others were having on Feb. 27th, referenced in this thread. Currently, this is only happening in my Chrome browser; it works in IE.
Hi,
From the thread that you provided, we can see that this issue was fixed, and up to now I haven't heard of it recurring. Based on your description, the issue only happens in Chrome. I tested this locally (creating a new website in Chrome) and it worked normally; I could not reproduce the issue, so I don't think this is a bug in Chrome. Please try clearing your session and cookies, or test from another machine. If the issue still exists, please contact the support team with detailed information by creating a support ticket at
http://www.windowsazure.com/en-us/support/contact/ Or, if that doesn't work because you don't have an active subscription, you will need to contact general customer support to have them create a support ticket for you:
http://support.microsoft.com/gp/customer-service-phone-numbers?wa=wsignin1.0
Best Regards
-
Troubles Exporting SQL databases from Azure
1. If I connect via SQL Server Management Studio and choose Export Data-tier Application to local disk, it never finishes.
2. If I export via the Azure web portal, it takes a few hours.
3. The only way that works is using SQL Server Management Studio to export to an Azure container, then downloading it and importing via SQL Server Management Studio, which seems like a lot of hassle.
I need the Azure SQL database to be transferred daily to my local machine, preferably automatically. How can I do it?
Hi Alex,
Since the issue regards Windows Azure SQL Database, I will help you move the question to the related forum. It is more appropriate, and more experts will be able to assist you.
As per Olaf's post, if you want to migrate databases between the SQL Server Database Engine and Windows Azure SQL Database, you can use the Windows Azure SQL Database Import and Export operations to copy databases between two different SQL Database servers.
In addition, you can also use the Windows Azure SQL Database copy database feature to make a consistent copy of a database, and perform the export from the copy. For more information, see
Copying Databases in Windows Azure SQL Database. And you also can migrate a database by using the generate scripts wizard. For more information, see:
How to: Migrate a Database by Using the Generate Scripts Wizard (Windows Azure SQL Database)
According to your description, you want to migrate the database daily and automatically. You can configure automated exports to schedule export operations for a SQL database, specify the frequency of export operations, and set the retention period for stored export files. You can also set up data sync between SQL Server and a Windows Azure SQL database, configuring bi-directional sync for the database. For more information, see:
Windows Azure SQL Data Sync.
Regards,
Sofiya Li
TechNet Community Support -
How long before user returns chart in azure preview portal
Hi,
I am trying to figure out how to add the "How long before user returns" chart from the VS Online AI dashboard to the Azure preview portal. Any advice on how to do that?
Thanks,
PV
Try out these steps in the Azure Preview Portal:
1. On the Application Overview blade, select the Users chart.
2. On the opened Metrics Explorer blade, select the grid under the charts.
3. In the chart details, select the metric "Time between sessions (Avg)".
4. In the "Group by" combo box, select a property by which to group the average time before a user returns.
Mihail Smacinih -
Scheduling web intelligence reports from SAP EP Portal : Server Side Trust
Hello,
We have set-up SSO between SAP EP 701, SAP BI 701 and Business Object XI 3.1 to allow users to access reports without having to sign-on again as explained here :
/people/ingo.hilgefort/blog/2008/09/19/businessobjects-and-sap--configure-sap-authentication
But, we have recently been contacted by some users because when scheduling Webi Report from a link within the portal they have the following errors :
"A database error occurred. The database error text is: Unable to connect to SAP BW server. System received an expired SSO ticket. (WIS 10901)"
The user told us that he doesn't encounter the error when logging in directly to BO InfoView (without SSO from the SAP Enterprise Portal).
The first Webi scheduling from the portal is successful (I suppose because the portal token is still valid).
I understand that we also have to configure server-side trust between the BO Enterprise server and the SAP BI 7 backend as explained here, but I do not really understand its purpose:
https://websmp106.sap-ag.de/~sapidb/011000358700001646962008E/XI3-1_BIP_SAP_INSTALL_EN.pdf
I've found a similar discussion here:
Issue with SAP Single Sign-On and Scheduling Reports
I still have some questions :
If we configure server-side trust between the BO Enterprise server and the SAP BI 7 backend, the portal logon ticket will still expire at some point in time. Does that mean that Webi report job scheduling should not be performed from the SAP EP Portal?
We haven't configured server-side trust, yet the users told us that they are able to schedule Webi reports directly from BO InfoView. How is that possible?
Thank you in advance for your help.
Regards.
Thank you, Mr. Hilgefort, for your detailed explanations.
I now have to provide some explanations to my managers, and to be honest, there are still some points that are unclear to me. It would be extremely helpful if you could confirm (or not) the following points.
When scheduling Webi reports from the SAP portal, we're getting SSO errors.
SAP provides the following note explaining how to extend the validity of the J2EE token (portal token), but this is not a long-term solution; at a certain point in time the ticket will expire. Webi scheduling should not be performed from the portal.
SAP Note 1352127 - Scheduled Webi report fails with: A database error occurred. The database error text is: Unable to connect to SAP BW server. System received an expired SSO ticket.
Webi scheduling should be performed from BO InfoView. SNC should be configured between the BO server and the SAP BI 7.0 backend.
We should configure server-side SNC as explained in the BusinessObjects XI Integration for SAP Solutions Installation and Administration Guide in the chapter "Configuring SAP for server-side trust" (1341043).
The SAP Portal is not involved here and is not an option, even with SNC/server-side trust configured.
thank you for your patience.
Best Regards. -
RE: Polymorphism - retrieving type information from thedatabase or how
Don,
Ok but if I was to model a real restaurant, I would then have a head chef
that can then delegate to other chefs. This head chef would have the
additional task of coordinating the completion of subservient chefs. This
does not and would not mean that the head chef is stuck (or partitioned) in
one part of the kitchen. Further a head chef would most likely also be a
chef so that he would be running around the kitchen using and interacting
with different objects to get his part of the recipe completed. Then once
all chefs have completed their part of the recipe the head chef could return
the meal.
I would also point out that it does not make sense to me to be talking about
the chef and its ability to scale. I would look at the resource-limited
devices that must be used to prepare meals to see scalability. In this case
the grill, the stove and the microwave. Scalability of the restaurant is a
function of the amount of resource limited devices versus the number of
process (i.e. chefs) that need to use those devices concurrently and the
amount of time they require access to those devices. By talking about chefs
as if they are the scalability limiting factor seems to bring us back to the
notion that the chef is a manager object that is shared. And again I come
back to the question, why?
You may now think that in a real restaurant, there are only so many chefs so
why not make it a shared service? Well in a real restaurant there are only
so many of any object, but this is not a consideration in our restaurant
model. In our "virtual" restaurant hiring a chef is as easy as:
Chef = new;
And of course chefs are of zero mass so there can be a whole lot in the
kitchen. Now assuming the Grill, Stove and Microwave map to physical
objects in our computing environment, then that is the limiting factor and
are therefore partitioned. Whenever communication has to go through a
single source, then scalability breaks down. I fear that too many people
make shared objects and create communication bottlenecks where they simply
don't exist. The only place your scalability bottlenecks should exist is in
the actual resource limited objects of your computing environment. Simply
said, if something isn't a resource limited object, then why is it shared?
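The point that only resource-limited objects should constrain scalability can be made concrete with a toy sketch (Python, all names hypothetical): chefs are free to create, while a semaphore models the limited grill slots, so the grill, not the chefs, bounds concurrency.

```python
import threading

GRILL_SLOTS = 2                    # the resource-limited device
grill = threading.BoundedSemaphore(GRILL_SLOTS)
peak = 0                           # highest observed concurrent grill use
in_use = 0
lock = threading.Lock()

def chef(orders_done):
    """A chef is cheap to create; only grill access is contended."""
    global peak, in_use
    with grill:                    # at most GRILL_SLOTS chefs cook at once
        with lock:
            in_use += 1
            peak = max(peak, in_use)
        # ... cook the order here ...
        with lock:
            in_use -= 1
    orders_done.append(1)

orders_done = []
chefs = [threading.Thread(target=chef, args=(orders_done,)) for _ in range(20)]
for t in chefs:
    t.start()
for t in chefs:
    t.join()
print(len(orders_done), peak)  # 20 orders complete; peak never exceeds 2
```

Twenty chef threads exist simultaneously at essentially zero cost; throughput is governed entirely by `GRILL_SLOTS`, which mirrors the argument that sharing should live only at the resource-limited objects.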
If anyone is not clear how to architect an application independently of the
business model, then I would suggest looking at various framework products
and reading some technical architecture white papers to get a different, and
possibly enlightening, point of view.
Mark Perreira
Sage IT Partners.
-----Original Message-----
From: Don Nelson [mailto:[email protected]]
Sent: Wednesday, June 17, 1998 9:04 AM
To: Mark Perreira
Cc: [email protected]
Subject: RE: Polymorphism - retrieving type information from the
database
Mark,
First, I completely agree about the naming. I purposely used rather
euphemistic names for these "managers", since I see many convoluted names
for common things in various applications. But that is a topic for another
thread...
Simply because there is a "manager" of some type, does not imply that it is
chained to a particular duty. However, let's look at a real life case. In
a large restaurant, you would rarely see a chef chopping carrots or serving
dishes to customers. Those are the responsibilities of the sous-chef and
the waiter. So, we see that the chef does not really follow the food
around. Why not? Because it simply doesn't scale. When scalability isn't
a problem, (the restaurant isn't that popular, for example) the chef has
some lattitude to accept more responsibility, and might even get involved
with purchasing, etc.
In the real world, the more scalable something has to be, the narrower the
responsibilities are for each of the participating members.
Don
At 12:59 AM 6/17/98 -0700, Mark Perreira wrote:
Don,
One thing that always baffles me is when should an Object get the moniker
"Manager." This practice seems to tell me a couple of things about these
objects. In general, when someone makes reference to a "Manager" object, it
is probably a service object and probably contains little or no attribution.
attribution. The question is why? If I am developing an object model why
am I thinking about such implementation issues.
Surely if you are trying to model cooking an egg I would not see
"SustenancePreparationManager" in your model. Using a more common term I
would still be alarmed to see "CookManager" in your model. What does the
CookManager manage? Does it manage other cooks, or eggs? Maybe it should be
called an EggManager, but that doesn't make sense. How about just Cook.
There that seems like the real world. And this brings me to a problem in
the analogy. Conjuring up managers in a model can sometimes make you miss a
container. For example, I would say that if we wanted to model the real
world, then eggs is a specialization of ingredient that is contained by
recipe that can be given to a cook to be prepared.
I may have many cooks (objects) that can prepare recipes and my application
architecture not the object model needs to deal with how to best let those
cooks utilize the grill, stove and microwave that sits on different
partitions on my server. My cooks can move around and when they do they
take their ability to know how to cook with them. In the real world I would
expect a cook to use the right appliance to prepare the recipe based on its
contents. I would not chain every cook to its appliance and them make me
responsible for giving the right cook the right recipe. This is what
managers can cause. They cause the consumer of cooks to know which cook can
prepare what recipes based on where they are chained. This then makes me
know something about cooking. And if I don't know anything about cooking I
can only imagine what my egg would look like if I accidentally gave the recipe
to the cook stationed at the microwave.
Ok, ok, I have seen many architectures use facades to hide the fact that I
like to chain my cooks to their appliance. But what is that? I have gone
to restaurants and I don't know what a cook facade is. If I ask the manager
to present the cook facade manager employee I would probably be met by the
bouncer employee.
So what is the answer? Well for a start keep the application architecture
out of the model. The model should stand alone in describing the
interactions required to satisfy use cases. Second find an architecture
that describes a more responsibility-driven design, and how that design
can map your business object behavior to a physical implementation with
appliances and cooking rules. And lastly, don't be so quick to chain your
cooks to their appliances. Give them some control over where they cook
their recipes, after all that is what they do.
Mark Perreira
Sage IT Partners.
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Don Nelson
Sent: Tuesday, June 16, 1998 2:07 PM
To: Nick Willson
Cc: [email protected]
Subject: Re: Polymorphism - retrieving type information from the
database
This thread is switching context a bit, but I would add one thought to the
idea of encapsulating behavior. One of the advantages to OO is that it
helps us model real world behavior. In the real world, I would not ask an
invoice to stuff itself into an envelope and mail itself to its
customer; I
would not ask my vehicle to fuel itself or change its own oil; I would not
tell an egg carton to ask one of its eggs to fry itself. Even if these
things were physically feasible, I could list a number of reasons why I
still wouldn't want to do them. That is why we have VehicleRepairManagers
and SustenancePreparationManagers (aka, "Mechanics" and "Cooks").
Don
At 11:28 PM 6/15/98 -0700, Nick Willson wrote:
Tim,
You've had lots of good suggestions so I hope you won't mind an attempt
at another one. The consensus seems to be for your option (1) for the
Vehicle table, and Steve's example of a GenericConstraint (taking the
place of your Vehicle) is probably how most people would go about
answering your question. I don't have much to add to that, just wanted
to offer something about where the persistence mechanism lives and how
things look to clients that depend on it.
Suppose for a moment you think about the Vehicle classes' persistence as
being just one aspect of their behavior. In addition to persistence,
you might have to implement security, or locking for concurrent access,
or caching of vehicle objects to improve performance, and of course you
want to calculate the vehicle tax and probably do other things with
Vehicles too.
You can put the persistence aspect of Vehicles into a
PersistenceObjectManager, but then the others need somewhere too. If
you use a bunch of Managers (one for security, one for locking...) then
each class's behavior is scattered across these various Manager classes,
each of which has to know about many classes. Or if you use one Manager
class, it's going to know still more, plus you are forced to implement
all the behavior in (or at least via) that manager's partition.
An alternative would be to keep all the Vehicle classes' behavior
encapsulated together, so a client always makes requests to a Vehicle,
and the Vehicle delegates the implementation of requests to a chain of
handler objects that hang off the vehicle object (a handler for
security, another for persistence, and so on).
One of the nice things about this is, the handlers can be responsible
for going to another partition (if necessary), e.g. to perform
persistence operations, or for more business-specific operations like
tax calculations. And because the handlers are smart, you don't have to
put a lot of code into service objects, the SOs can stay pretty simple.
This isn't an approach you'll see in Express, so I hope it's of some
interest.
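The handler-chain idea described here might be sketched as follows (Python stand-ins; the class names follow the Vehicle example but the details are invented): clients talk only to the Vehicle, which delegates each request down a chain of aspect handlers.

```python
class Handler:
    """One aspect of a Vehicle's behavior; handlers form a chain."""
    def __init__(self, nxt=None):
        self.nxt = nxt

    def handle(self, request, vehicle):
        # Default: pass the request along the chain, or stop at the end.
        return self.nxt.handle(request, vehicle) if self.nxt else None

class SecurityHandler(Handler):
    def handle(self, request, vehicle):
        if not request.get("user"):
            raise PermissionError("no user on request")
        return super().handle(request, vehicle)

class PersistenceHandler(Handler):
    def handle(self, request, vehicle):
        if request["op"] == "save":
            # In the scheme described above, this handler would be the one
            # responsible for going to another partition to persist.
            return f"saved {vehicle.vin}"
        return super().handle(request, vehicle)

class Vehicle:
    def __init__(self, vin):
        self.vin = vin
        # Client code talks only to the Vehicle; the chain is internal.
        self.chain = SecurityHandler(PersistenceHandler())

    def request(self, op, **kw):
        return self.chain.handle({"op": op, **kw}, self)

v = Vehicle("VIN-123")
print(v.request("save", user="tim"))  # → saved VIN-123
```

Adding locking or caching then means inserting another handler into the chain, leaving the Vehicle's public interface and its clients untouched.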
Nick Willson
SCAFFOLDS Consultant,
Sage IT Partners, Inc.
(415) 392 7243 x 373
[email protected]
The Leaders in Internet Enabled Enterprise Computing
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
============================================
Don Nelson
Regional Consulting Manager - Rocky Mountain Region
Forte Software, Inc.
Denver, CO
Phone: 303-265-7709
Corporate voice mail: 510-986-3810
aka: [email protected]
============================================
"When you deal with higher numbers, you need higher math." - Hobbes
Don,
You are absolutely correct. But this is where I honestly think you are
missing the point. While the mail program sends the mail, my mail message
has an interface (i.e. send button) which can delegate that to the mail
program. This makes it nice and simple for me the consumer of the mail
program. It also means I can think of mailing by focusing on the interface
(i.e. the button). It would suck if every time I wanted to mail something I
had to identify the correct POP server to send it to (i.e. the MailManager).
Mailing something is the collaboration of the setup information of the mail
program and my mail message. If I were to model this my mail object would
indeed have a send method that could delegate to the correct mail servers.
This is just simplicity of interface and it is a good practice in UI
development just as it is in business model development. A simpler
interface, I think we can all agree, provides for a better and quicker
understanding.
Mark Perreira
Sage IT Partners.
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Don Nelson
Sent: Thursday, June 18, 1998 9:22 AM
To: Nick Willson
Cc: [email protected]
Subject: Re: Polymorphism - retrieving type information from the
database
Nick,
It turns out that your message does not, indeed, send itself. Your mailing
program does that.
Don
At 11:54 PM 6/17/98 -0700, Nick Willson wrote:
Hey Don,
In the real world, no, you can't tell an invoice to put itself into an envelope
and mail itself. You have to know about stamps and post boxes and where they
are located. But isn't it nice that in software you don't have to follow the
real world very closely if you don't want to?
Above the top left-hand corner of this message I'm typing right now, there is a
send button which lets me tell the message to 'stuff itself into an envelope
and mail itself'. Why wouldn't you want to do that?
Don Nelson wrote:
This thread is switching context a bit, but I would add one thought to the
idea of encapsulating behavior. One of the advantages to OO is that it
helps us model real world behavior. In the real world, I would not ask an
invoice to stuff itself into an envelope and mail itself to its customer; I
would not ask my vehicle to fuel itself or change its own oil; I would not
tell an egg carton to ask one of its eggs to fry itself. Even if these
things were physically feasible, I could list a number of reasons why I
still wouldn't want to do them. That is why we have VehicleRepairManagers
and SustenancePreparationManagers (aka, "Mechanics" and "Cooks").
Don
At 11:28 PM 6/15/98 -0700, Nick Willson wrote:
Tim,
You've had lots of good suggestions so I hope you won't mind an attempt
at another one. The consensus seems to be for your option (1) for the
Vehicle table, and Steve's example of a GenericConstraint (taking the
place of your Vehicle) is probably how most people would go about
answering your question. I don't have much to add to that, just wanted
to offer something about where the persistence mechanism lives and how
things look to clients that depend on it.
Suppose for a moment you think about the Vehicle classes' persistence as
being just one aspect of their behavior. In addition to persistence,
you might have to implement security, or locking for concurrent access,
or caching of vehicle objects to improve performance, and of course you
want to calculate the vehicle tax and probably do other things with
Vehicles too.
You can put the persistence aspect of Vehicles into a
PersistenceObjectManager, but then the others need somewhere too. If
you use a bunch of Managers (one for security, one for locking...) then
each class's behavior is scattered across these various Manager classes,
each of which has to know about many classes. Or if you use one Manager
class, it's going to know still more, plus you are forced to implement
all the behavior in (or at least via) that manager's partition.
An alternative would be to keep all the Vehicle classes' behavior
encapsulated together, so a client always makes requests to a Vehicle,
and the Vehicle delegates the implementation of requests to a chain of
handler objects that hang off the vehicle object (a handler for
security, another for persistence, and so on).
One of the nice things about this is, the handlers can be responsible
for going to another partition (if necessary), e.g. to perform
persistence operations, or for more business-specific operations like
tax calculations. And because the handlers are smart, you don't have to
put a lot of code into service objects, the SOs can stay pretty simple.
This isn't an approach you'll see in Express, so I hope it's of some
interest.
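Nick's handler-chain idea can be sketched roughly like this (a minimal Python illustration, not Forte TOOL; the class names, requests, and the "save" behavior are all invented for the example):

```python
class Handler:
    """One aspect of Vehicle behavior (security, persistence, ...)."""
    def __init__(self, next_handler=None):
        self.next = next_handler

    def handle(self, request, vehicle):
        # Default: pass the request further down the chain.
        if self.next:
            return self.next.handle(request, vehicle)
        raise ValueError(f"unhandled request: {request}")

class SecurityHandler(Handler):
    def handle(self, request, vehicle):
        # Check access first, then delegate the rest of the work.
        if request == "save" and not vehicle.owner:
            raise PermissionError("no owner recorded")
        return super().handle(request, vehicle)

class PersistenceHandler(Handler):
    def handle(self, request, vehicle):
        if request == "save":
            return f"saved {vehicle.vin}"  # stand-in for a database call
        return super().handle(request, vehicle)

class Vehicle:
    def __init__(self, vin, owner):
        self.vin, self.owner = vin, owner
        # Clients talk only to the Vehicle; it delegates to its handlers.
        self.chain = SecurityHandler(PersistenceHandler())

    def save(self):
        return self.chain.handle("save", self)

print(Vehicle("VIN123", "Tim").save())  # -> saved VIN123
```

Each aspect stays in its own handler, yet the client still sees one simple Vehicle interface rather than a spread of Manager classes.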
--
Nick Willson
SCAFFOLDS Consultant,
Sage IT Partners, Inc.
(415) 392 7243 x 373
[email protected]
The Leaders in Internet Enabled Enterprise Computing
============================================
Don Nelson
Regional Consulting Manager - Rocky Mountain Region
Forte Software, Inc.
Denver, CO
Phone: 303-265-7709
Corporate voice mail: 510-986-3810
aka: [email protected]
============================================
"When you deal with higher numbers, you need higher math." - Hobbes--
Nick
-
Retrieval performance becomes poor with dynamic calc members with formulas
We are facing a retrieval performance issue on our partition cube.
Performance was fine before we applied member formulas to 4 of the measures and made them dynamic calc.
The retrieval time has increased from 1 sec to 5 sec.
Here is the main formula on a member; all of these members are dynamic calc (having a member formula):
IF (@ISCHILD ("YTD"))
    IF (@ISMBR ("JAN_YTD") AND @ISMBR ("Normalised"))
        "Run Rate" =
            (@AVG (SKIPNONE, @LIST (@CURRMBR ("Year")->"JAN_MTD",
                @RANGE (@SHIFT (@CURRMBR ("Year"), -1, @LEVMBRS ("Year", 0)), @LIST ("NOV_MTD", "DEC_MTD")))) *
             @COUNT (SKIPNONE, @RSIBLINGS (@CURRMBR ("Period")))) + "04";
    ELSE
        IF (@ISMBR ("FEB_YTD") AND @ISMBR ("Normalised"))
            "Run Rate" =
                (@AVG (SKIPNONE, @RANGE (@SHIFT (@CURRMBR ("Year"), -1, @LEVMBRS ("Year", 0)), "DEC_MTD"),
                    @RANGE (@CURRMBR ("Year"), @LIST ("JAN_MTD", "FEB_MTD"))) *
                 @COUNT (SKIPNONE, @RSIBLINGS (@CURRMBR ("Period")))) + "04";
        ELSE
            "Run Rate" =
                (@AVGRANGE (SKIPNONE, "Normalised Amount", @CURRMBRRANGE ("Period", LEV, 0, -14, -12)) *
                 @COUNT (SKIPNONE, @RSIBLINGS (@CURRMBR ("Period"))))
                + "Normalised"->"04";
        ENDIF;
    ENDIF;
ELSE
    0;
ENDIF
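Reading the last branch of that formula: it averages "Normalised Amount" over the three Period members 12 to 14 positions before the current one, scales by the number of remaining periods, and adds "Normalised"->"04". A rough Python restatement, assuming a flat ordered list of level-0 Period members (the member names, the data, and the exact sibling count are all assumptions about the outline, not taken from it):

```python
def run_rate(normalised_amount, periods, current):
    """Rough restatement of the third branch of the Run Rate formula."""
    i = periods.index(current)
    # @CURRMBRRANGE("Period", LEV, 0, -14, -12): members at offsets -14..-12
    window = [normalised_amount[periods[i + off]] for off in (-14, -13, -12)]
    avg = sum(window) / len(window)    # @AVGRANGE(SKIPNONE, ...)
    remaining = len(periods) - i - 1   # @COUNT of @RSIBLINGS(current period)
    return avg * remaining + normalised_amount["04"]

# Hypothetical 24-month outline with flat data:
periods = [f"P{n:02d}" for n in range(24)]
data = {p: 100.0 for p in periods}
data["04"] = 7.0
print(run_rate(data, periods, "P20"))  # averages P06..P08 -> 307.0
```

Every cell retrieved triggers this trailing-window walk at query time, which is why moving the members to dynamic calc slowed retrievals so noticeably.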
Period is dense
Year is dense
Measures (Normalised) is dense
remaining all sparse
block size 112 KB
index cache set to 10 MB
retrieval buffer 70 KB
dynamic calculator cache max set to 200 MB
Please note that this is a partition cube, retrieving data from 2 ASO and 1 BSO underlying cubes.

I received the following from Hyperion. I had the customer add the following line to their essbase.cfg file, and it increased their Analyzer retrieval performance from 30 seconds to 0.4 seconds:
CalcReuseDynCalcBlocks FALSE
This is an undocumented setting (it will be documented in Essbase v6.2.3). Here is a brief explanation of this setting from development: it turns off a method of reusing dynamically calculated values during retrievals. The method is on by default and can speed up retrievals that involve a large number of dynamically calculated blocks which are each required to compute several other blocks. This may happen when there is a big hierarchy of sparse dynamic calc members. However, a large dynamic calculator cache size or a large value of CALCLOCKBLOCK may adversely affect retrieval performance when this method is used. In such cases, the method should be turned off by setting CalcReuseDynCalcBlocks to FALSE in the essbase.cfg file. Only retrievals are affected by this setting.
-
BIP 11G - Retrieving a cursor from a function defined in a package
We had previously deployed reports in OBIP 10G....
The reports deployed in OBIP 10G used an XDO file to interact with the database and return data in XML format.
This action used to be defined in the ‘sqlStatement’ tag of the XDO file.
<dataQuery>
<sqlStatement name="Q1"> select PK_AP_GEN_REP.AP_GEN_REP('AP_CH_R101',1) FROM DUAL </sqlStatement>
</dataQuery>
For this it would invoke our package called ‘PK_AP_GEN_REP’ and pass the report specific function name ('AP_CH_R101') to it as an input parameter.
This returned a cursor.
Hereafter the resultant cursor from above would get each column mapped with those mentioned in ‘dataStructure’ tags of XDO file to form an XML file.
Currently we are trying the same in OBIP 11G.
We are creating the dataset with SQL query
The problem is that the SQL statement “select PK_AP_GEN_REP.AP_GEN_REP('AP_CH_R101',1) FROM DUAL” is not effective.
Also if the above query is fired it does not retrieve the column names to define the data structure.
Solutions tried:-
1. Forcefully defining the data-structure in XDM file created for this report. This did not work.
2. Firing a Before “Event Trigger”. This only works if the function is returning a Boolean and not a cursor.
We tried modifying the function to return a Boolean and to populate the report specific table which could be queried to fetch the records.
We defined Oracle DB Default Package = PK_AP_GEN_REP
We defined the trigger as PK_AP_GEN_REP.AP_GEN_REP('AP_CH_R101',1)
Now creating the dataset with SQL query does not return any data.
It is giving the following error:-
"XML document must have a top level element. Error processing resource 'http://iflmud5im00094:9704/xmlpserver/servlet/xdo'."
However, the query fires if Oracle DB Default Package is reset to nothing... But in this case the event trigger did not fire.
If there is a wayout involving any of the above 2 steps or any other way please share the solution
Kindly let me know how I may handle this issue of retrieving a cursor from a function defined in a package in BIP 11g.

Further regarding this...
I am not sure about the performance implications of using pipelined functions.
Just to share: instead of what you suggested, I have tried returning a 'table' type object from a PL/SQL function after CASTing the required query to this object.
Stretching the discussion forward, is there a way to cast a ref cursor into a PL/SQL table?
This could absolutely fit my requirement...
Thanks in advance
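For reference, the 'table' type approach mentioned above is usually paired with a pipelined table function, which BI Publisher can then query as an ordinary SELECT. A hedged sketch (all type, function, table, and column names here are hypothetical, not the poster's actual PK_AP_GEN_REP package):

```sql
-- Hypothetical sketch: expose report rows through a pipelined function.
CREATE TYPE t_report_row AS OBJECT (col1 VARCHAR2(100), col2 NUMBER);
/
CREATE TYPE t_report_tab AS TABLE OF t_report_row;
/
CREATE OR REPLACE FUNCTION ap_gen_rep_rows (p_report IN VARCHAR2)
  RETURN t_report_tab PIPELINED
IS
BEGIN
  -- Replace with the report-specific query selected by p_report.
  FOR r IN (SELECT some_col, some_num FROM some_table) LOOP
    PIPE ROW (t_report_row(r.some_col, r.some_num));
  END LOOP;
  RETURN;
END;
/
-- The BI Publisher data set then becomes a plain SQL query,
-- so column names are available to define the data structure:
-- SELECT * FROM TABLE(ap_gen_rep_rows('AP_CH_R101'));
```

Because the data set is now a plain SELECT with named columns, the 11g data model can derive the structure that the bare function call in the original question could not provide.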