Here's a detailed measurement instrument for assessing EFL students' writing performance in a writing class, adapted from the TOEFL iBT Independent Writing Rubric (0–5 scale). This version is tailored for classroom use and provides clear criteria, descriptors, and scoring guidelines to help both instructors and students understand what is expected.
Target: EFL university students in a writing class
Task Type: Independent essay writing (opinion or argument-based)
Length: ~300–350 words
Time: 30–45 minutes (in-class) or untimed (home assignment)
| Criteria | Score 5 – Excellent | Score 4 – Good | Score 3 – Fair | Score 2 – Limited | Score 1 – Very Weak | Score 0 – No Response / Off-topic |
|---|---|---|---|---|---|---|
| 1. Task Achievement / Development | Fully addresses the topic; ideas are fully developed with clear and relevant examples. | Addresses the topic well; some ideas may not be fully developed or lack specific examples. | Partially addresses the topic; development is limited and may include irrelevant details. | Poorly addresses the topic; ideas are unclear or minimally developed. | Barely addresses the topic; lacks relevant ideas or examples. | No response or completely off-topic. |
| 2. Organization / Coherence | Essay is well-organized; clear progression of ideas; effective use of cohesive devices. | Generally well-organized; ideas mostly clear; adequate use of transitions. | Some organization present; ideas may be repetitive or loosely connected. | Organization is weak; transitions are lacking or confusing; ideas do not flow logically. | No clear organization; ideas are disjointed or jumbled. | No structure; incoherent or not understandable. |
| 3. Language Use / Grammar | Consistently accurate grammar; wide range of sentence structures and complexity. | Minor grammatical errors; variety of sentence structures attempted. | Noticeable grammatical issues; limited variety in sentence construction. | Frequent grammatical errors; sentence structure is basic and repetitive. | Persistent grammatical problems that interfere with meaning. | Language errors so severe that writing is incomprehensible. |
| 4. Vocabulary Use | Wide and precise vocabulary; word choice is appropriate and enhances meaning. | Adequate vocabulary; minor misuse of words but meaning is clear. | Limited vocabulary; some awkward or incorrect word choices. | Repetitive or basic vocabulary; frequent word choice errors that obscure meaning. | Very limited vocabulary; frequent misuse of words that confuse the reader. | Vocabulary is too poor or unclear to understand. |
| 5. Mechanics / Conventions | Consistent control of spelling, punctuation, and capitalization. | Occasional errors in mechanics, but they do not interfere with understanding. | Some mechanical errors that may distract but do not block communication. | Frequent mechanical errors that interfere with reading. | Persistent mechanical errors that significantly hinder understanding. | Errors so numerous that the writing cannot be evaluated. |
Total Score: 25 points (5 criteria × max 5 points each)
Grade Conversion (Optional):
23–25 = A
20–22 = B+
17–19 = B
14–16 = C+
11–13 = C
Below 11 = D/F
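The total-score and grade-conversion arithmetic above can be sketched in a few lines of Python. This is an illustrative helper only; the function and variable names are my own and not part of the rubric.

```python
def rubric_grade(scores):
    """Convert five criterion scores (0-5 each) into a total and letter grade.

    `scores` is a list of five integers, one per rubric criterion.
    Grade bands follow the optional conversion table above.
    """
    if len(scores) != 5 or any(s < 0 or s > 5 for s in scores):
        raise ValueError("expected five scores between 0 and 5")
    total = sum(scores)  # maximum 25
    if total >= 23:
        grade = "A"
    elif total >= 20:
        grade = "B+"
    elif total >= 17:
        grade = "B"
    elif total >= 14:
        grade = "C+"
    elif total >= 11:
        grade = "C"
    else:
        grade = "D/F"
    return total, grade

print(rubric_grade([5, 4, 4, 3, 4]))  # (20, 'B+')
```

Instructors could adapt the band thresholds if they prefer a stricter or more lenient conversion.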
Use this rubric as both a diagnostic (initial writing) and formative (throughout the course) tool.
Can be adapted for peer or self-assessment.
Consider supplementing with written feedback for qualitative insight.
Here’s a detailed measurement instrument designed for assessing critical thinking skills in an EFL writing class, based on the Watson-Glaser Critical Thinking Appraisal (WGCTA). This instrument has been contextualized for writing activities and adapted for use in an Indonesian EFL university setting.
This instrument assesses students’ critical thinking skills in writing through five subskills based on WGCTA:
Inference
Recognition of Assumptions
Deduction
Interpretation
Evaluation of Arguments
Each subskill is integrated into writing-based tasks and evaluated using performance rubrics tailored to EFL contexts.
Each writing prompt is designed to elicit specific critical thinking subskills. Students’ written responses will be assessed using analytic rubrics provided below.
Task 1: Inference
Prompt: Read a short article or data report (e.g., on climate change, technology in education, or a cultural issue). Based on the information, write a short argumentative paragraph stating your conclusion and the reasons for it.
Purpose: To assess the student’s ability to draw logical conclusions from given data or text.
Task 2: Recognition of Assumptions
Prompt: Respond to an opinionated text (e.g., “Studying abroad guarantees better job opportunities”). Write a critical response analyzing any underlying assumptions the author makes.
Purpose: To evaluate whether students can identify unstated premises or beliefs.
Task 3: Deduction
Prompt: You are given a set of premises (e.g., "All effective essays have clear thesis statements. This essay does not have a thesis statement..."). Write whether the conclusion follows logically and explain your reasoning.
Purpose: To assess students’ ability to apply formal reasoning and identify valid conclusions.
Task 4: Interpretation
Prompt: Read a paragraph containing ambiguous or conflicting statements. Write a short analysis interpreting what the writer most likely intended, using evidence from the text.
Purpose: To assess the student’s ability to weigh evidence and determine meaning.
Task 5: Evaluation of Arguments
Prompt: Write an essay responding to an argumentative claim (e.g., “Social media improves students’ writing skills”). Evaluate the quality of evidence and reasoning in the opposing view.
Purpose: To evaluate how students judge the strength and relevance of arguments.
Each subskill is scored on a 4-point analytic scale, adapted for EFL writing proficiency.
| Subskill | Excellent (4) | Good (3) | Fair (2) | Needs Improvement (1) |
|---|---|---|---|---|
| Inference | Draws insightful, logically sound conclusions well-supported by evidence | Draws generally sound conclusions with adequate support | Conclusion is somewhat logical; support is limited | Conclusion lacks logic or is unsupported |
| Recognition of Assumptions | Clearly identifies implicit assumptions and explains their impact | Identifies some assumptions with general explanation | Assumptions partially identified or misinterpreted | Fails to identify or analyze assumptions |
| Deduction | Applies logical reasoning accurately; conclusions follow from premises | Mostly accurate deduction with minor errors | Some flawed reasoning or logical gaps | Inaccurate or illogical deductions |
| Interpretation | Interprets complex or ambiguous content accurately and thoughtfully | Generally accurate interpretation with some insights | Limited interpretation; some misunderstandings | Misinterprets or overlooks key information |
| Evaluation of Arguments | Evaluates argument strength critically and articulates counterpoints effectively | Evaluates argument with some critical insight | Limited critical evaluation or weak counterpoints | Fails to critically evaluate arguments |
Each subskill is scored individually on the 4-point scale, giving a maximum total score of 20.
Results can be used for:
Diagnostic feedback
Tracking progress across a semester
Research on the relationship between writing instruction and critical thinking
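The subskill scoring described above can be summarized with a short helper. This is a minimal sketch under the assumption that each subskill receives one 1–4 rating; the function name and return structure are illustrative, not part of the instrument.

```python
# The five WGCTA-based subskills, in the order presented in the instrument.
SUBSKILLS = ["Inference", "Recognition of Assumptions", "Deduction",
             "Interpretation", "Evaluation of Arguments"]

def critical_thinking_profile(ratings):
    """Aggregate five subskill ratings (1-4 each) into a profile.

    `ratings` maps each subskill name to its rubric score.
    Returns per-subskill scores plus the total out of 20.
    """
    missing = [s for s in SUBSKILLS if s not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    for s in SUBSKILLS:
        if not 1 <= ratings[s] <= 4:
            raise ValueError(f"{s}: rating must be between 1 and 4")
    total = sum(ratings[s] for s in SUBSKILLS)
    return {"per_subskill": {s: ratings[s] for s in SUBSKILLS},
            "total": total, "max": 20}
```

Keeping the per-subskill breakdown (rather than only the total) supports the diagnostic-feedback use noted above.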
Here’s a detailed measurement instrument based on the Second Language Writing Anxiety Inventory (SLWAI) by Cheng (2004), tailored for use in EFL Writing Classes. This includes the instrument structure, dimensions, item list, and scoring system, with additional explanation for use in an Indonesian EFL university context.
To measure second language writing anxiety among EFL university students, especially in writing classes that apply Inquiry-Based Learning (IBL).
Number of Items: 22
Format: 5-point Likert scale
Dimensions:
Somatic Anxiety – physiological effects (e.g., tension, nervousness)
Cognitive Anxiety – negative expectations, worry, fear of evaluation
Avoidance Behavior – tendency to avoid writing in L2
| Scale | Description |
|---|---|
| 1 | Strongly Disagree |
| 2 | Disagree |
| 3 | Neutral |
| 4 | Agree |
| 5 | Strongly Agree |
| Dimension | Item Numbers | Number of Items |
|---|---|---|
| Somatic Anxiety | 1–7 | 7 |
| Cognitive Anxiety | 8–14 | 7 |
| Avoidance Behavior | 15–22 | 8 |
Items are presented below grouped by dimension; they may be reordered before administration.
1. I feel my heart pounding when I write in English.
2. My hands tremble when I write assignments in English.
3. I often feel physically tense when I have to write in English.
4. I perspire when I write in English under time pressure.
5. I experience a lot of bodily discomfort when writing in English.
6. I get nervous when my English writing is being read by others.
7. I feel overwhelmed by physical stress when I write in English.
8. I worry about making grammatical errors when writing in English.
9. I fear that my English writing is not good enough for academic standards.
10. I am afraid that my writing will be judged negatively by the teacher.
11. I worry a lot before starting an English writing task.
12. I often doubt my ideas when writing in English.
13. I think others can write in English better than I can.
14. I constantly think that I will fail in English writing tasks.
15. I avoid writing in English whenever possible.
16. I postpone English writing assignments until the last minute.
17. I try to avoid classes where I have to do a lot of writing in English.
18. I prefer multiple-choice exams over essay writing in English.
19. I often skip English writing tasks.
20. I use translation tools to avoid writing directly in English.
21. I rarely revise my English writing because I feel it won’t help.
22. I choose topics that require less writing in English.
Total Score Range: 22 (minimum) – 110 (maximum)
Interpretation:
22–43: Low writing anxiety
44–76: Moderate writing anxiety
77–110: High writing anxiety
Subscale Scores:
Each subscale can be scored separately to identify the dominant type of anxiety.
Administer at the start and/or end of the semester to assess changes.
Use results to tailor instruction and support (e.g., addressing cognitive stress or increasing scaffolding for those with high avoidance).
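The total, band interpretation, and subscale scoring described above can be sketched as follows. This assumes the items are numbered in the grouped order listed here (somatic 1–7, cognitive 8–14, avoidance 15–22); adjust the mapping if your administered form orders them differently. Names are illustrative.

```python
# Item-to-dimension mapping; adjust the index ranges to match the
# item ordering on the administered questionnaire.
DIMENSIONS = {
    "Somatic Anxiety": range(1, 8),
    "Cognitive Anxiety": range(8, 15),
    "Avoidance Behavior": range(15, 23),
}

def score_slwai(responses):
    """Score a completed 22-item SLWAI form.

    `responses` maps item number (1-22) to a Likert rating (1-5).
    Returns the total score, its interpretation band, and
    per-dimension subscale scores.
    """
    if sorted(responses) != list(range(1, 23)):
        raise ValueError("expected responses for items 1-22")
    total = sum(responses.values())  # range 22-110
    if total <= 43:
        band = "Low writing anxiety"
    elif total <= 76:
        band = "Moderate writing anxiety"
    else:
        band = "High writing anxiety"
    subscales = {dim: sum(responses[i] for i in items)
                 for dim, items in DIMENSIONS.items()}
    return {"total": total, "band": band, "subscales": subscales}
```

Comparing subscale scores across pre- and post-semester administrations would show whether, say, avoidance behavior changed more than cognitive anxiety.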
Here's a detailed measurement instrument designed to assess motivation in an EFL writing class, based on Dörnyei’s L2 Motivational Self System (L2MSS). This instrument can be used for university-level students and includes components of:
Ideal L2 Self
Ought-to L2 Self
L2 Learning Experience
— tailored specifically to a writing class context.
Format:
5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree)
Self-administered questionnaire
Estimated completion time: 10–15 minutes
Sections: Demographics + L2MSS subscales
(Short and general)
Age:
Gender:
Major:
Years learning English:
Previous English writing instruction:
English proficiency level (self-rated): [ ] Low [ ] Medium [ ] High
These items explore the learner’s image of their ideal English self, especially in the context of writing.
I imagine myself writing fluently in English in the future.
I want to become someone who can write strong academic texts in English.
I can picture myself writing emails or reports in English for work or study.
I hope to publish something in English one day.
I feel excited when I think about writing in English like a native speaker.
I often think about improving my writing skills to become the writer I want to be.
Writing well in English is part of the person I want to become.
My ideal future self includes being confident in academic English writing.
These items examine the learner’s sense of duty, obligation, or external expectations tied to learning English writing.
People important to me expect me to write well in English.
I feel I must write well in English to avoid letting others down.
My family wants me to succeed in English writing classes.
Employers will expect me to have good English writing skills.
I think society values people who can write effectively in English.
I work hard on my writing because I don’t want to disappoint my lecturers.
I would feel guilty if I didn’t try hard to improve my English writing.
These items reflect the learner’s present experience and attitudes toward the English writing class and its environment.
I enjoy the activities we do in my English writing class.
The writing tasks in this class are meaningful to me.
I feel supported by my writing instructor.
I find the feedback I get on my writing helpful and motivating.
I like the topics we write about in class.
The learning environment in this writing class encourages me to do my best.
I feel confident when I complete writing assignments.
Working with classmates during writing activities motivates me.
The way writing is taught in this class matches my learning style.
I feel more interested in writing in English because of this class.
What motivates you the most to write in English?
How do you see your English writing skills in the future?
What challenges do you face in this writing class?
Ideal L2 Self: Mean score of 8 items
Ought-to L2 Self: Mean score of 7 items
L2 Learning Experience: Mean score of 10 items
Higher mean = stronger endorsement of that motivational component.
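The subscale means above can be computed with a short helper. This is a minimal sketch; the function name and labels are my own, and the item counts (8, 7, 10) follow the scoring notes above.

```python
def l2mss_means(ideal, ought, experience):
    """Compute mean scores for the three L2MSS subscales.

    Each argument is a list of Likert ratings (1-5):
    8 Ideal L2 Self items, 7 Ought-to L2 Self items,
    and 10 L2 Learning Experience items.
    """
    expected = {"Ideal L2 Self": (ideal, 8),
                "Ought-to L2 Self": (ought, 7),
                "L2 Learning Experience": (experience, 10)}
    means = {}
    for name, (ratings, n) in expected.items():
        if len(ratings) != n:
            raise ValueError(f"{name}: expected {n} ratings, got {len(ratings)}")
        if any(not 1 <= r <= 5 for r in ratings):
            raise ValueError(f"{name}: ratings must be between 1 and 5")
        means[name] = round(sum(ratings) / n, 2)
    return means
```

Since each subscale is a mean on the same 1–5 scale, the three components can be compared directly to see which motivational source is strongest for a given learner.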
Several links in the PAQ application process are susceptible to challenges and deviations:
Questionnaire Comprehension by Respondents: This is a major hurdle. The PAQ uses fairly technical and standardized language describing job elements. Employees with lower educational levels or those for whom English (if that's the language of the questionnaire) is a second language may struggle to accurately interpret the questions and provide meaningful responses. This can lead to inaccurate data and skewed results.
Subjectivity and Rater Bias: Even with clear instructions, the interpretation of the PAQ items can be subjective. Different raters (job incumbents, supervisors, analysts) might perceive and evaluate the same job tasks differently based on their individual experiences, perspectives, and biases. This can introduce systematic error into the data.
Respondent Motivation and Engagement: If employees do not understand the purpose of the PAQ or feel it is a burden, they may not put sufficient effort into completing it accurately. This can lead to careless or rushed responses, impacting the reliability of the data.
Defining the "Job": Clearly defining the boundaries of a "job" can be challenging, especially in organizations with fluid roles or team-based structures. If the scope of the job is not well-defined, respondents may include tasks that fall outside their core responsibilities or omit essential duties.
Time Lag and Job Evolution: Jobs are not static. The PAQ captures a snapshot in time. If there's a significant time lag between data collection and analysis, or if the job has evolved since the questionnaire was administered, the results may not accurately reflect the current realities of the role.
Organizational Culture and Trust: In organizations with low trust or a history of using job analysis for potentially negative purposes (e.g., layoffs), employees may be hesitant to provide honest and comprehensive information.
To ensure the validity and reliability of pay analysis results using the PAQ, researchers and organizations should take the following steps:
Addressing Language Complexity:
Selecting Representative Respondents to Avoid Biased Results:
Warnings of Socioeconomic, Age, and Educational Differences:
The socioeconomic, age, and educational differences observed in a study using the PAQ have significant implications for the interpretation and application of the job analysis results:
To address these warnings, researchers and organizations should:
By proactively addressing these potential challenges and considering the implications of demographic differences, organizations can significantly enhance the validity and reliability of their PAQ-based job analysis and ensure fairer and more equitable pay practices.
The PAQ is a structured job analysis instrument containing 194 job elements organized into six major dimensions:
Information Input
Mental Processes
Work Output
Relationships with Other Persons
Job Context
Other Job Characteristics
Here are its advantages as a standardized tool:
The generalizability of the PAQ can be effective in contexts where:
Despite its advantages, the PAQ's standardized nature can fail to adequately capture the uniqueness or details of a particular position in several situations:
It seems there might be a slight misunderstanding in the last part of your question, as it asks about adapting or supplementing pay to improve its applicability to job analysis. Pay is typically an outcome of job analysis and job evaluation, not a tool used for job analysis itself (like the PAQ).
However, if you meant to ask whether job analysis outcomes, including pay data, can be supplemented or adapted to provide a wider understanding of job differences, the answer is a resounding yes.
Here's how job analysis information, including pay, can be supplemented for a broader understanding:
In summary, while the PAQ offers a standardized and quantifiable approach to job analysis with benefits for broad comparisons, its standardized nature can limit its ability to capture the unique details of all jobs, especially high-level, highly specialized, or rapidly evolving roles. Combining the PAQ with other job analysis methods and qualitative data provides a more comprehensive and nuanced understanding of job requirements and can lead to more effective HR practices, including compensation management.
Building Trust and Rapport (Crucial in this context):
Enhancing the Survey Itself:
Optimizing Distribution and Follow-Up (Adapting to the Indonesian Context):
Enhancing Validity: