Aggregate report
I am building a report using JSF and Hibernate. The report has an aggregate function (COUNT). How do I store this value in my object? I am using dataTable to display the result set. I would like to avoid creating a new object just for reporting purposes. Is there a good solution?
For example, my query is
SELECT aCol, bCol, cCol, COUNT(*)
FROM myTable
group by aCol, bCol, cCol
my jsp:
<t:dataTable id="data" styleClass="TabForegroundColor"
headerClass="standardTable_Header" footerClass="standardTable_Header"
rowClasses="AltRows1, AltRows2"
var="bl"
value="#{mydatahandler.myModel}"
preserveSort="true">
This is really more of a Hibernate question, but you want to do something like this:
Query query = session
.createQuery("select new model.reports.PastDueReport(change.requestnumber, change.requestcreationdate, change.expectedturnoverdate, " +
"change.risklevel, change.currentstatus, change.targetplatform, change.requesteruserid, change.shortdescription, " +
"change.problem, change.exception, change.fastpass) from ChangeRequest change where change.currentstatus = 'open' and change.expectedturnoverdate <= sysdate");
openChangeReport = (List) query.list();
PastDueReport is a value object whose constructor maps to the columns in the HQL statement; openChangeReport is a List of those value objects. Just insert your own query and create a matching mapped object.
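Applied to the original COUNT(*) query, the same pattern would look roughly like this — a minimal sketch, where the class name, property names, and types are assumptions, not from the original post:

```java
// Hypothetical value object for:
//   select new AggregateRow(t.aCol, t.bCol, t.cCol, count(*))
//   from MyTable t group by t.aCol, t.bCol, t.cCol
// Hibernate invokes this constructor once per result row.
class AggregateRow {
    private final String aCol;
    private final String bCol;
    private final String cCol;
    private final long rowCount; // COUNT(*) comes back as a long

    public AggregateRow(String aCol, String bCol, String cCol, long rowCount) {
        this.aCol = aCol;
        this.bCol = bCol;
        this.cCol = cCol;
        this.rowCount = rowCount;
    }

    public String getACol() { return aCol; }
    public String getBCol() { return bCol; }
    public String getCCol() { return cCol; }
    public long getRowCount() { return rowCount; }
}
```

The dataTable columns can then bind to these getters directly, e.g. #{bl.rowCount} for the COUNT value.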
Similar Messages
-
Windows 7 Client Operating Systems (Aggregate) reports not working
Hello All,
We are using Windows 7 Management pack in SCOM 2012.
We tried running reports in the Windows 7 Client OS (Aggregate) MP.
There is no data in the report. Do we need to enable any rules to collect data for these reports?
We need Memory related reports in this MP.
We have Windows 7 agents and we can see performance data in the Ops console for those agents.
Thanks!!
Verify that you have installed the Windows 7 management pack from the link below:
http://www.microsoft.com/en-us/download/details.aspx?id=15700
You can also check the link below to resolve your issue:
http://social.technet.microsoft.com/Forums/en-US/54efc509-cbb0-407c-b826-17694cd5f0bc/windows-7-client-operating-systems-aggregate-reports-not-showing-any-data?forum=operationsmanagerreporting
Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question, please click "Mark As Answer" -
Captivate 4 Aggregate Reporting
We have created a training course using the aggregate feature in Captivate. The training has multiple modules in the table of contents and contains individual quiz questions. We have published to the Connect server. The problem is that as soon as the user opens a module they are marked complete as having taken the training. They should not be marked complete until they finish all modules associated with the overall aggregate project.
I think you may find that this is something you need to set in the Connect Server configuration. I don't think the Captivate module has control of how the LMS records completion. It seems like Connect is recording completion based merely on the fact that the user has accessed the module at all, rather than according to how much of it they have completed or whether they passed.
I don't personally use Connect. Can anyone else that administers a Connect server chime in here? Can you configure it to grant completion by any other criteria than accessing a module? -
Typically I do my reporting of data at the most granular level, so I'm unsure how to do the following:
I have a fact table (Table A) with summarized data as follows...
Doctor | Code | # of Cases
Smith | Heart | 20
Smith | Vascular | 30
Jones | Orthopedic | 100
etc.
I have a dimension table (Table B) that is basically a lookup table that houses all unique Codes, and their average profitability per case...
Code | Average Profitability
Heart | $300
Orthopedic | $125
Vascular | $50
etc.
I want to join table A to B, and then produce a final report that looks like this:
Doctor | Code | # of Cases | Avg Profitability | Total Opportunity (Cases x Avg Profitability)
Smith | Heart | 20 | $300 | $6,000
Smith | Vascular | 30 | $50 | $1,500
Jones | Orthopedic | 100 | $125 | $12,500
AND to build reports summarized even higher by Doctor, like this:
Smith | 50 | $7,500
Jones | 100 | $12,500
or by Code, like this:
Orthopedic | 500 | $62,500
Heart | 100 | $30,000
How do I set these 2 tables up in the RPD?
I don't think you need any dimensional hierarchy for this type of requirement.
In your first case you are grouping by doctor at the code level, which is why the number of cases appears for each doctor and code combination.
At the aggregated level you don't have to group by code. If you don't select the code column, you get the aggregated report.
Just create another report with the query below and I think you should get what you need (note: sum the cases rather than count rows, and multiply cases by the average profitability before summing):
Select a.doctor, sum(a.cases), sum(a.cases * b.avg_profitability)
from TableA a, TableB b
where a.code = b.code
group by a.doctor
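The same join-and-aggregate logic, sketched in plain Java with the sample rows from the post — the class and method names are illustrative only, just to show the arithmetic:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class OpportunityRollup {
    // Fact rows: doctor, code, number of cases (from the sample data above)
    static final Object[][] FACTS = {
        {"Smith", "Heart", 20},
        {"Smith", "Vascular", 30},
        {"Jones", "Orthopedic", 100},
    };

    // Dimension lookup: code -> average profitability per case
    static final Map<String, Integer> AVG_PROFIT =
        Map.of("Heart", 300, "Vascular", 50, "Orthopedic", 125);

    // Equivalent of: sum(cases * avg_profitability) group by doctor
    static Map<String, Integer> opportunityByDoctor() {
        Map<String, Integer> totals = new LinkedHashMap<>();
        for (Object[] row : FACTS) {
            int cases = (Integer) row[2];
            totals.merge((String) row[0],
                         cases * AVG_PROFIT.get((String) row[1]),
                         Integer::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        System.out.println(opportunityByDoctor()); // {Smith=7500, Jones=12500}
    }
}
```

Smith's $7,500 is 20 × $300 + 30 × $50, matching the doctor-level summary above.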
Hope it helps
thanks
Prash -
Key figures in Cost element based report
Dear All,
I have created a Cost Element based report for the project using Report Painter. In this same report, I want to pull in a key figure ('remaining forecast planned costs') which comes from the cost forecasting report (report 12CTC1 from transaction CJE0).
The issue is that this key figure is not available for reports made in report group 6P3 (the report group for cost element based reports). Is there any way I can pull this particular key figure into my report?
Thanks in advance !
Regards,
Mahendra Dighe
Also, if you're using HR data, the employee data is time dependent, so for each employee number there can be any number of records. This may be causing the aggregate reporting condition that you mentioned. In BW it's important to recognize that master data can also play a role in how many records are returned; the InfoProvider is not the only source of additional records. For example:
Employee has three records in the time range.
Salary infoprovider has only one.
Results will contain THREE records, one for each employee record, all with the same salary record. On an aggregate basis, the salary results will be multiplied by three.
Time dependent master data can be a source of much confusion. For some HR reporting, I have created a 'most current' employee master data record that is not time dependent, and is loaded daily. I then use that for data models such as salary or other separate infoproviders. -
Report showing vendors for particular material groups
I am trying to perform an analysis to determine vendors which have procured items for specific material groups. I have 64 different material groups which represent indirect materials and I want to be able to determine what vendors have been procuring these materials. I have run transaction ME2C to examine purchasing documents, however there is way too much information and the system cannot generate the list.
I was curious if there was some aggregate report that I could use in the Logistics Information System. I know the Purchasing Information System has Standard Analysis for Material Groups, however I have to drill down to each Material Group and change the breakdown by Vendor to get the list and this can be somewhat time consuming considering I will have to do this 64 times.
Any suggestions would be appreciated.
Thanks,
Don
Use transaction MC$<
Execute the selection.
In the report, choose Settings > Characteristics display from the menu and select either Key or Key and description.
Then choose Material group analysis > Export > Transfer to XXL -
Reporting by org hierarchy with drill down by each level of reports to...
Hello,
I'm trying to determine how to aggregate reporting by org hierarchy. Ultimately I want to report on opportunities at the highest level (CEO; all opportunities below the CEO), then a second drill-down report that shows a summary by Level 2 (all those who report to the CEO), then a Level 3 drill-down showing those who report to someone in Level 2. In all instances I want the report to show all records that ultimately roll up to that level (thus including direct reports and all of their direct and indirect reports).
Level 1
CEO $10,000,000 200 Optys
Level 2
Sales Leader 1 $ 3,000,000
Sales Leader 2 $ 2,000,000
Sales Leader 3 $ 1,500,000
Sales Leader 4 $ 3,500,000
Level 3
- Rollup of all who report to Sales Leader 1 (and so on), aggregated by first level of direct reports ($3,000,000)
Sales Mgr 1.1 $ 1,000,000
Sales Mgr 1.2 $ 500,000
Sales Mgr 1.3 $ 750,000
Sales Mgr 1.4 $ 250,000
I'd appreciate any help you can send my way.
Thanks, Aaron
Hi Aaron,
I have come across this and found that using the "Reports To" hierarchy and "Territory Team" hierarchy are not sufficient. I implemented this as a solution for one of our clients and it works very well:
I have modified the user entity and renamed 4 standard fields:
• Business Unit renamed to "Primary Line of Business"
• Business Unit Level 1 renamed to "BU Group"
• Business Unit Level 2 renamed to "Business Unit"
• Business Unit Level 3 renamed to "Team"
Not all fields come through into analytics, so I had to use these fields, which are available in the Opportunity-Product History subject area. The downside is that they have to be text boxes, so restrict access to who can populate them. From this you can get 4 hierarchy levels and drill from one to the next. The record owner then becomes the lowest level in your report, and it can look something like this:
Level 1
Primary Line of Business
Level 2
BU Group
BU Group
Level 3
Business Unit
Business Unit
Business Unit
Level 4
Team
Team
Team
Team
Level 5
Sales Person 1
Sales Person 2
Sales Person 3
Sales Person 4
Sales Person 5
Obviously it would appear side by side in the report.
Thanks -
Hi ,
I am working on interface volume reporting for PI: daily, weekly and monthly reports, both individual and aggregate, and also interactive reporting on interfaces. The sources are the databases of the ABAP engine, the AAE, and the archived data of both stacks.
I am looking to hear from anyone who has worked on volume reporting for interfaces whose messages are in the database or archived. Currently we use performance monitoring to report from the database, but our retention period for interfaces is only 2 days, so we can't rely on that monitoring; we want to run similar reports on the archived data as well.
My understanding is that the history tables (SMX*HIST, ) and archive metadata tables (ZARIXBC1) contain interface metadata, so we can run reports on those tables. I am wondering if anyone has worked along the same lines, or has a better solution.
Thanks,
Laxman
Last I heard, the rule of thumb is to pick one direction (ingress or egress) and stick to it when configuring all the interfaces of the entire router, lest the same flow get counted twice due to mixing ingress and egress, as you've witnessed. Even then, if one router is all ingress and another all egress, but they both export NetFlow records to the same collector/reporting server, a flow passing through a pair of neighboring interfaces on the two routers would still be double-counted. I don't know how NetFlow v9 or Flexible NetFlow resolves this without the IOS allowing an interface to be configured with both ingress and egress flow caches simultaneously. That, plus the NetFlow collector/analyzer needs the intelligence to deduplicate.
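To illustrate the deduplication idea only — this is not any vendor's implementation, and the flow key is reduced to a src->dst string for the sketch:

```java
import java.util.LinkedHashSet;
import java.util.List;

class FlowDedup {
    // Collapse duplicate flow records exported by two routers for the same flow.
    // A real NetFlow key would also include ports, protocol, ToS, etc.
    static List<String> dedupe(List<String> flowKeys) {
        // LinkedHashSet keeps only the first occurrence of each key, in order
        return List.copyOf(new LinkedHashSet<>(flowKeys));
    }

    public static void main(String[] args) {
        List<String> exported = List.of(
            "10.0.0.1->10.0.0.2",  // seen at router A (ingress)
            "10.0.0.1->10.0.0.2",  // same flow seen again at router B (egress)
            "10.0.0.3->10.0.0.4");
        System.out.println(dedupe(exported)); // [10.0.0.1->10.0.0.2, 10.0.0.3->10.0.0.4]
    }
}
```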
Here's a blog post that seems to suggest some NetFlow reporting software can resolve this issue on its own, working with mixed-direction NetFlow v9 exports. However, I can't ascertain whether such software exists yet.
http://www.plixer.com/blog/scrutinizer/netflow-version-9-egress-vs-ingress/ -
Server Uptime Availability Reports in SCOM
Hi Friends,
Whenever I run the Availability Report for a server by adding the computer object, it calculates across all the monitors present on the server and gives us the average/aggregate report as the server uptime.
Because of that we are getting wrong server availability information. We need to submit our server availability report to our client, and we committed to 99.5% uptime.
We have used the Uptime.exe tool to capture the exact uptime, and it shows the exact downtime. The only advantage, and the reason we opted to take the reports in SCOM, is that SCOM has the ability to exclude planned maintenance from the total downtime.
Please help me to get the exact report that satisfies my requirement. I think we can do that with the help of SQL Reporting Services; if yes, kindly guide me through the steps.
Thanks & Regards,
Dinesh Sundaram
Hi Dinesh,
This is by design - "Whenever I run the Availability Report for a server by adding the computer object, it calculates across all the monitors present on the server and gives us the average/aggregate report as the server uptime."
What you need to do is Add the health service watcher object rather than the computer object.
So in the availability report, when you select add object, type in a server name and search and you'll see the search results include a number of items. One of those has an icon of a pair of glasses (watcher!) - class health service watcher.
This will give you availability of the agent which is (I think) the metric that you want.
Cheers
Graham
View OpsMgr tips and tricks at http://systemcentersolutions.wordpress.com/ -
Dimension values without data in a fact table
I have an ODS system and a Data warehouse system
I have a Sales fact table in the ODS system and I have these fields:
SALES
ID_CUSTOMER (PK),
ID_MODEL (PK),
ID_TIME (PK),
SALES,
QUANT_ART,
COST
Then in some records the fields ID_Time, ID_Model, or ID_Customer don't have values (NULL), because in the transactional systems these records don't have values (NULL).
The users want to generate aggregate reports with the Sales table...
The question is:
Should I put a "dummy" value in the Customer, Model, and Time dimensions (for example "0") and use this value in the fact table where the dimension fields have NULL values?
Or should I leave the NULL values?
What is the best choice? Why?
There's often some specific reason why these values don't exist, such as the record being a manual adjustment to sales (for example a journal voucher). In these cases it can be helpful to have a flag column to indicate this, so that when a user comes across a bunch of sales with a Store Name of "Unknown" or "Not Applicable" they can also look at the reason for this unusual entry.
-
Hello!
I'm having a problem implementing the DAO pattern.
Suppose that I have two database tables:
emp(id, name, sex, deptid)
dept(id, name)
If I follow the DAO pattern, I use two DAO interfaces, one for each table/entity: EmployeeDAO and DepartmentDAO.
(I'm using an abstract factory to create storage-specific DAOS)
These DAOs return instances of Employee, and Department, or lists of them. (ValueObjects).
This is all great and works very well, but suppose I want to produce the following
presentation on the web:
deptname | male | female
Dept A | 10 | 20
Dept B | 15 | 30
In essence, this is a request for all the departments.
I would iterate through this list, and want to display how many
males, and how many females there are in each department.
Should this be in the DepartmentDAO, or in a separate DAO?
Or should this be put in some BusinessDelegate?
That is, DepartmentDelegate.countMales(dept);
Or should I put a method in the ValueObject Department that in turn uses the DAO to count males?
Or should I load the number of females into the valueobject when fetching it from the
database in the first place?
Or should I construct a specialized view of the department such as:
class StupidViewOfDepartment {
    private Department dept;
    private int males;
    private int females;

    public StupidViewOfDepartment(Department dept, int males, int females) {
        this.dept = dept;
        this.males = males;
        this.females = females;
    }

    public int numFemales() {
        return females;
    }

    public int numMales() {
        return males;
    }
}
...having some class return a collection of this specialized view?
In that case, which class would that be?
A new DAO or the DepartmentDAO?
All the classical examples of the DAO pattern that I can find fail to address anything beyond retrieving a single Employee, or a list of them.
Can someone advise me on this?
You said:
My problem might be, that the data I'm asking for, is not distinct objects, business objects,
but a "new type of object" consisting of this particular information, that is
deptname, numMales, numFemales.
EXACTLY! You are querying for data that is either aggregate, a combination of various other business objects or a very large set of known business objects. In any of these cases, you probably don't want to use a vanilla DAO. Write a dedicated search DAO. Depending on your OO purity level and time horizon, you could make VO's for the search request or the results returned.
You said:
I'd like to think of this as report functionality, or aggregate reports.
I'm good at database programming, and I'm particularly good at optimization,
so if I cannot do this the good-looking way, I can always resort to brutal techniques...ehum
PERFECT! If you are great at database operations, and you know exactly how you want to optimize a given search, then give it its own DAO. The main problem with the object-relational boundary is that most cookie-cutter solutions (a la entity beans with CMP) cannot even remotely approach the optimization level of a good database programmer. If you want to optimize a search in SQL or a stored procedure, do that. Then have a dedicated search DAO use that functionality. (If you want to do it "right", make a search Factory object that returns various implementations; some may be vendor-specific or optimized, others might be generic. The Factory simply returns a search DAO interface, while specific implementations concentrate on the task at hand. Swapping implementations with the same interface should be trivial.)
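A minimal sketch of that dedicated search DAO for the department/gender report — all names here are illustrative, not from any framework, and the SQL in the comment is one possible backing query, not a prescribed one:

```java
import java.util.List;

// Value object shaped exactly like the report: deptname | male | female
class DeptGenderCount {
    final String deptName;
    final int males;
    final int females;

    DeptGenderCount(String deptName, int males, int females) {
        this.deptName = deptName;
        this.males = males;
        this.females = females;
    }
}

// Dedicated search DAO: one optimized query, one purpose-built result type.
// A JDBC implementation could back it with a single grouped statement, e.g.:
//   SELECT d.name,
//          SUM(CASE WHEN e.sex = 'M' THEN 1 ELSE 0 END),
//          SUM(CASE WHEN e.sex = 'F' THEN 1 ELSE 0 END)
//   FROM dept d LEFT JOIN emp e ON e.deptid = d.id
//   GROUP BY d.name
interface DeptGenderReportDAO {
    List<DeptGenderCount> findGenderCountsByDepartment();
}
```

The abstract factory mentioned in the question would then hand out a storage-specific implementation of this interface, and the web tier would iterate the returned list directly.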
- Saish
"My karma ran over your dogma." - Anon -
What is the threshold to decide whether partitioning is needed or not?
I have Oracle 10g Enterprise Edition, but the partitioning option has not been purchased yet. I foresee the total amount of data reaching about 200 to 300 GB by the end of 2012, 400 GB by the end of 2013, and 500 GB by the end of 2014. There may be four or five tables with about 2 million records (and these will be the biggest tables).
I have read that partitioning is definitely needed when the size of the database exceeds 500 GB. Are there any other criteria/thresholds that suggest at what point the partitioning option should be recommended?
Hi,
whether or not you need partitioning depends not only on your data size, but mostly on your data structure. If you can think of "units" or "chunks" that your data can be broken in, and your business needs are such that they could be satisfied by one or a few "chunks", then partitioning may be the solution for you. For example, if your main application table stores data chronologically, and you often need to wipe off data for one month, or move it to cheaper storage, or take it offline for some kind of maintenance. Or if your data is distributed more or less uniformly across 20 different locations, and your users are often interested in seeing aggregate reports for just one location, etc.
If your data doesn't have a natural partitioning key, and cannot be broken into convenient "chunks" by such a key, then not only may partitioning not improve your performance, it can make it worse.
There is a common misconception about partitioning: people tend to think of partitions as a way to complement indexing to get better selectivity. E.g. you have a table with columns a (indexed) and b (not indexed), and you have a report using WHERE a=:a and b>:b and think "ok, let me range-partition my table on b, then my report will run even faster with the combined power of indexing and partition pruning". In reality, your report will be running slower, maybe even much slower, depending on how many partitions will be covered by b>:b, because instead of doing an index unique scan once, you'll be doing it once per every index partition in your partition range. Of course, global indexes don't have this problem, but they sort of cancel the advantages of partitioning (what good is being able to operate on a small chunk of data if you have to follow up by rebuilding a huge global index?).
Another common misconception is the urge to partition just because a table is becoming "too big". I've seen people partition on obscure synthetic keys used only in table joins, explaining that "well, I know this is not a good partitioning key, but I had to partition this table, it was becoming oh so big". In the best-case scenario, performance won't improve, but very likely it will get worse.
Best regards,
Nikolay -
Help with configuring AP-1240AG as local authenticator for EAP-FAST client
Hi,
I am trying to configure an AP-1240AG as a local authenticator for a Windows XP client with no success. Here is a part of the AP configuration:
dot11 lab_test
authentication open eap eap_methods
authentication network-eap eap_methods
guest-mode
infrastructure-ssid
radius-server local
eapfast authority id 0102030405060708090A0B0C0D0E0F10
eapfast authority info lab
eapfast server-key primary 7 211C7F85F2A6056FB6DC70BE66090DE351
user georges nthash 7 115C41544E4A535E2072797D096466723124425253707D0901755A5B3A370F7A05
Here is the Windows XP client configuration:
Authentication: Open
Encryption: WEP
Disable Cisco ccxV4 improvements
username: georges
password: georges
Results: The show radius local-server statistics does not show any activity for the user georges and the debug messages are showing the following:
*Mar 4 01:15:58.887: %DOT11-7-AUTH_FAILED: Station 0016.6f68.b13b Authentication failed
*Mar 4 01:16:28.914: %DOT11-7-AUTH_FAILED: Station 0016.6f68.b13b Authentication failed
*Mar 4 01:16:56.700: RADIUS/ENCODE(00001F5C):Orig. component type = DOT11
*Mar 4 01:16:56.701: RADIUS: AAA Unsupported Attr: ssid [263] 19
*Mar 4 01:16:56.701: RADIUS: [lab_test]
*Mar 4 01:16:56.701: RADIUS: 65 [e]
*Mar 4 01:16:56.701: RADIUS: AAA Unsupported Attr: interface [156] 4
*Mar 4 01:16:56.701: RADIUS: 38 32 [82]
*Mar 4 01:16:56.701: RADIUS(00001F5C): Storing nasport 8275 in rad_db
*Mar 4 01:16:56.702: RADIUS(00001F5C): Config NAS IP: 10.5.104.22
*Mar 4 01:16:56.702: RADIUS/ENCODE(00001F5C): acct_session_id: 8026
*Mar 4 01:16:56.702: RADIUS(00001F5C): sending
*Mar 4 01:16:56.702: RADIUS/DECODE: parse response no app start; FAIL
*Mar 4 01:16:56.702: RADIUS/DECODE: parse response; FAIL
It seems that the RADIUS packet that the AP receives is not what is expected. I don't know whether the problem is with the client or with the AP configuration. I have tried many things but am running out of ideas. Any suggestions would be welcome.
Thanks
Hi Stephen,
I do not want to create a workgroup bridge, just want to have the wireless radio bridge with the Ethernet port. I will remove the infrastructure command.
Thanks for your help
Stephane
Here is the complete configuration:
version 12.3
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
hostname Lab
ip subnet-zero
aaa new-model
aaa group server radius rad_eap
aaa group server radius rad_mac
aaa group server radius rad_admin
aaa group server tacacs+ tac_admin
aaa group server radius rad_pmip
aaa group server radius dummy
aaa authentication login eap_methods group rad_eap
aaa authentication login mac_methods local
aaa authorization exec default local
aaa accounting network acct_methods start-stop group rad_acct
aaa session-id common
dot11 lab_test
authentication open eap eap_methods
authentication network-eap eap_methods
guest-mode
infrastructure-ssid
power inline negotiation prestandard source
bridge irb
interface Dot11Radio0
no ip address
no ip route-cache
ssid lab_test
traffic-metrics aggregate-report
speed basic-54.0
no power client local
channel 2462
station-role root
antenna receive right
antenna transmit right
no dot11 extension aironet
bridge-group 1
bridge-group 1 block-unknown-source
no bridge-group 1 source-learning
no bridge-group 1 unicast-flooding
bridge-group 1 spanning-disabled
interface Dot11Radio1
no ip address
no ip route-cache
shutdown
dfs band 3 block
speed basic-6.0 9.0 basic-12.0 18.0 basic-24.0 36.0 48.0 54.0
channel dfs
station-role root
no dot11 extension aironet
bridge-group 1
bridge-group 1 subscriber-loop-control
bridge-group 1 block-unknown-source
no bridge-group 1 source-learning
no bridge-group 1 unicast-flooding
bridge-group 1 spanning-disabled
interface FastEthernet0
no ip address
no ip route-cache
duplex auto
speed auto
bridge-group 1
no bridge-group 1 source-learning
bridge-group 1 spanning-disabled
hold-queue 160 in
interface BVI1
ip address 10.5.104.22 255.255.255.0
ip default-gateway 10.5.104.254
ip http server
no ip http secure-server
ip http help-path http://www.cisco.com/warp/public/779/smbiz/prodconfig/help/eag
ip radius source-interface BVI1
radius-server local
eapfast authority id 000102030405060708090A0B0C0D0E0F
eapfast authority info LAB
eapfast server-key primary 7 C7AC67E296DF3437EB018F73BE00D822B8
user georges nthash 7 14424A5A555C72790070616C03445446212202080A75705F513942017A76057007
control-plane
bridge 1 route ip
line con 0
line vty 0 4
end -
Cisco 1142 Wireless access point intermittently will not authenticate
Hi all,
We have a Cisco 1142 standalone access point, and from time to time I will come into the office and it will not authenticate any users to either our guest or corporate networks. I then have to go in and reboot the access point. After that, it begins to work. Any advice? Here's my configuration below:
Current configuration : 6450 bytes
version 12.4
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
hostname cisco-chiap01
logging monitor errors
enable secret 5 $1$fsD8$CU42/3/Up5AAlL4hQWvvg0
aaa new-model
aaa group server radius rad_eap
server 172.17.16.12 auth-port 1645 acct-port 1646
server 172.17.21.10 auth-port 1812 acct-port 1813
aaa group server radius rad_mac
aaa group server radius rad_acct
aaa group server radius rad_admin
aaa group server tacacs+ tac_admin
aaa group server radius rad_pmip
aaa group server radius dummy
server 172.17.21.10 auth-port 1812 acct-port 1813
aaa group server radius rad_eap2
server 172.17.16.12 auth-port 1645 acct-port 1646
server 172.17.21.10 auth-port 1812 acct-port 1813
aaa authentication login eap_methods group rad_eap
aaa authentication login mac_methods local
aaa authentication login eap_methods2 group rad_eap2
aaa authorization exec default local
aaa accounting network acct_methods start-stop group rad_acct
aaa session-id common
login on-failure log
login on-success log
dot11 syslog
dot11 vlan-name Admin vlan 100
dot11 vlan-name DevNetwork vlan 20
dot11 vlan-name Guest vlan 150
dot11 vlan-name Network vlan 16
dot11 ssid DevNetwork
vlan 20
authentication open eap eap_methods2
authentication network-eap eap_methods2
authentication key-management wpa version 2
dot11 ssid Guest
vlan 150
authentication open
authentication key-management wpa version 2
guest-mode
mbssid guest-mode
wpa-psk ascii 7 142407060101380B013A3A2670435642
information-element ssidl advertisement
dot11 ssid Network
vlan 16
authentication open eap eap_methods2
authentication network-eap eap_methods2
authentication key-management wpa version 2
username monkeyman privilege 15 secret 5 $1$ZZ7C$rqimu2FNONdfeacMNGAD/.
bridge irb
interface Dot11Radio0
no ip address
ip helper-address 172.17.19.10
no ip route-cache
encryption mode ciphers aes-ccm
encryption vlan 16 mode ciphers aes-ccm
encryption vlan 150 mode ciphers aes-ccm
encryption vlan 20 mode ciphers aes-ccm
ssid DevNetwork
ssid Guest
ssid Network
antenna gain 0
parent timeout 120
speed 5.5 11.0 basic-6.0 9.0 12.0 36.0 48.0 54.0
packet retries 128 drop-packet
channel 2462
station-role root
rts threshold 512
rts retries 128
interface Dot11Radio0.11
encapsulation dot1Q 11
no ip route-cache
interface Dot11Radio0.16
encapsulation dot1Q 16 native
no ip route-cache
bridge-group 1
bridge-group 1 subscriber-loop-control
bridge-group 1 block-unknown-source
no bridge-group 1 source-learning
no bridge-group 1 unicast-flooding
bridge-group 1 spanning-disabled
interface Dot11Radio0.20
encapsulation dot1Q 20
no ip route-cache
bridge-group 20
bridge-group 20 subscriber-loop-control
bridge-group 20 block-unknown-source
no bridge-group 20 source-learning
no bridge-group 20 unicast-flooding
bridge-group 20 spanning-disabled
interface Dot11Radio0.150
encapsulation dot1Q 150
no ip route-cache
bridge-group 150
bridge-group 150 subscriber-loop-control
bridge-group 150 block-unknown-source
no bridge-group 150 source-learning
no bridge-group 150 unicast-flooding
bridge-group 150 spanning-disabled
interface Dot11Radio1
no ip address
ip helper-address 172.17.19.10
no ip route-cache
encryption vlan 16 mode ciphers aes-ccm
encryption vlan 150 mode ciphers aes-ccm
encryption vlan 20 mode ciphers aes-ccm
ssid DevNetwork
ssid Guest
ssid Network
antenna gain 0
traffic-metrics aggregate-report
dfs band 3 block
mbssid
parent timeout 120
speed 6.0 12.0 basic-24.0 36.0 48.0 54.0
channel width 40-above
channel dfs
station-role root access-point
interface Dot11Radio1.11
encapsulation dot1Q 11
no ip route-cache
interface Dot11Radio1.16
encapsulation dot1Q 16 native
no ip route-cache
bridge-group 1
bridge-group 1 subscriber-loop-control
bridge-group 1 block-unknown-source
no bridge-group 1 source-learning
no bridge-group 1 unicast-flooding
bridge-group 1 spanning-disabled
interface Dot11Radio1.20
encapsulation dot1Q 20
no ip route-cache
bridge-group 20
bridge-group 20 subscriber-loop-control
bridge-group 20 block-unknown-source
no bridge-group 20 source-learning
no bridge-group 20 unicast-flooding
bridge-group 20 spanning-disabled
interface Dot11Radio1.150
encapsulation dot1Q 150
no ip route-cache
bridge-group 150
bridge-group 150 subscriber-loop-control
bridge-group 150 block-unknown-source
no bridge-group 150 source-learning
no bridge-group 150 unicast-flooding
bridge-group 150 spanning-disabled
interface GigabitEthernet0
no ip address
no ip route-cache
duplex auto
speed auto
no keepalive
interface GigabitEthernet0.11
encapsulation dot1Q 11
no ip route-cache
interface GigabitEthernet0.16
encapsulation dot1Q 16 native
no ip route-cache
bridge-group 1
no bridge-group 1 source-learning
bridge-group 1 spanning-disabled
interface GigabitEthernet0.20
encapsulation dot1Q 20
no ip route-cache
bridge-group 20
no bridge-group 20 source-learning
bridge-group 20 spanning-disabled
interface GigabitEthernet0.100
encapsulation dot1Q 100
ip address 192.168.100.3 255.255.255.0
no ip route-cache
bridge-group 100
no bridge-group 100 source-learning
bridge-group 100 spanning-disabled
interface GigabitEthernet0.150
encapsulation dot1Q 150
no ip route-cache
bridge-group 150
no bridge-group 150 source-learning
bridge-group 150 spanning-disabled
interface BVI1
ip address 172.17.16.251 255.255.255.0
no ip route-cache
ip http server
no ip http secure-server
ip http help-path http://www.cisco.com/warp/public/779/smbiz/prodconfig/help/eag
ip radius source-interface GigabitEthernet0
access-list 1 permit 172.17.16.1
access-list 1 remark Admin network access
access-list 1 permit 192.168.100.0 0.0.0.255
radius-server attribute 32 include-in-access-req format %h
radius-server host 172.17.21.10 auth-port 1812 acct-port 1813 key 7 047958071C3561410D4A44
radius-server host 172.17.16.12 auth-port 1645 acct-port 1646 key 7 08045E471A48574446
radius-server host 172.17.21.10 auth-port 1645 acct-port 1646 key 7 1320051B185D56797F
radius-server timeout 15
radius-server vsa send accounting
bridge 1 route ip
line con 0
line vty 0 4
access-class 1 in
end
When the issue occurs, does it affect both 2.4GHz and 5GHz devices? I would check which band's devices are affected.
I noticed you have set channel 11 statically under Radio 0. I would prefer to configure it as below, so the AP can change the channel depending on the environment:
int d0
channel least-congested
HTH
Rasika
**** Pls rate all useful responses ****